Test Report: KVM_Linux_crio 19790

b9d2e2c9658f87d0032c63e9ff5f9056e8d14f14:2024-10-14:36644

Failed tests (34/319)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 152.34
37 TestAddons/parallel/MetricsServer 306.35
46 TestAddons/StoppedEnableDisable 154.21
147 TestFunctional/parallel/ImageCommands/ImageRemove 3.07
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.52
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
165 TestMultiControlPlane/serial/StopSecondaryNode 141.58
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.72
167 TestMultiControlPlane/serial/RestartSecondaryNode 6.3
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.39
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 416.57
172 TestMultiControlPlane/serial/StopCluster 142.07
232 TestMultiNode/serial/RestartKeepsNodes 326.19
234 TestMultiNode/serial/StopMultiNode 145.26
241 TestPreload 165.66
249 TestKubernetesUpgrade 401.17
321 TestStartStop/group/old-k8s-version/serial/FirstStart 301.07
346 TestStartStop/group/no-preload/serial/Stop 139.18
349 TestStartStop/group/embed-certs/serial/Stop 138.98
352 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.02
353 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
355 TestStartStop/group/old-k8s-version/serial/DeployApp 0.49
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 81.69
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
363 TestStartStop/group/old-k8s-version/serial/SecondStart 733.91
364 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.11
365 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.27
366 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.32
367 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.41
368 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 478.96
369 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 440.28
370 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 327.78
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 138.37
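Any single failure in this table can be re-run outside CI with the standard Go test runner against minikube's integration suite. The sketch below is an assumption-laden example rather than the exact invocation this Jenkins job uses: it assumes a local checkout with out/minikube-linux-amd64 already built, and it leaves out the driver/runtime selection flags (kvm2 + crio here) that the job passes to the suite.

	# hedged sketch: run one failed test by name from the repo root
	go test -v ./test/integration -run 'TestAddons/parallel/Ingress' -timeout 60m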
TestAddons/parallel/Ingress (152.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-313496 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-313496 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-313496 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [32314337-afcd-4dcd-9ee8-4d9c09bdfb5a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [32314337-afcd-4dcd-9ee8-4d9c09bdfb5a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003845245s
I1014 13:42:08.436044   15023 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-313496 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.132859021s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
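The probe that failed is the curl run through "minikube ssh" at addons_test.go:262: the stderr above shows the remote command exiting with status 28, which matches curl's "operation timed out" exit code, so the request to the nginx ingress on 127.0.0.1 inside the VM timed out instead of returning a response. While the addons-313496 profile is still up, the same probe can be repeated by hand; the -v and -m 10 options below are additions for faster, more verbose diagnosis and are not part of the original test.

	# hedged reproduction of the failing ingress check (verbose output, 10s cap)
	out/minikube-linux-amd64 -p addons-313496 ssh "curl -v -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"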
addons_test.go:286: (dbg) Run:  kubectl --context addons-313496 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.177
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-313496 -n addons-313496
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 logs -n 25: (1.410255952s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-882366                                                                     | download-only-882366 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-520840                                                                     | download-only-520840 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-882366                                                                     | download-only-882366 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-011047 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | binary-mirror-011047                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35043                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-011047                                                                     | binary-mirror-011047 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | addons-313496                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | addons-313496                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-313496 --wait=true                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | -p addons-313496                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-313496 ssh cat                                                                       | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | /opt/local-path-provisioner/pvc-c19f89aa-af99-4f45-994e-6760df4750a7_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-313496 ip                                                                            | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-313496 ssh curl -s                                                                   | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-313496 ip                                                                            | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:44 UTC | 14 Oct 24 13:44 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:38:51.387253   15646 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:51.387350   15646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:51.387360   15646 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:51.387366   15646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:51.387583   15646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:38:51.388245   15646 out.go:352] Setting JSON to false
	I1014 13:38:51.389067   15646 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1281,"bootTime":1728911850,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:38:51.389156   15646 start.go:139] virtualization: kvm guest
	I1014 13:38:51.391309   15646 out.go:177] * [addons-313496] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:38:51.392581   15646 notify.go:220] Checking for updates...
	I1014 13:38:51.392598   15646 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:38:51.393881   15646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:51.395260   15646 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:38:51.396475   15646 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:51.397637   15646 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:38:51.398722   15646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:38:51.399941   15646 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:51.430268   15646 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 13:38:51.431512   15646 start.go:297] selected driver: kvm2
	I1014 13:38:51.431526   15646 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:38:51.431539   15646 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:38:51.432245   15646 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:51.432329   15646 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:38:51.446115   15646 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:38:51.446145   15646 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:51.446362   15646 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:38:51.446392   15646 cni.go:84] Creating CNI manager for ""
	I1014 13:38:51.446430   15646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:38:51.446440   15646 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:51.446484   15646 start.go:340] cluster config:
	{Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:51.446587   15646 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:51.448247   15646 out.go:177] * Starting "addons-313496" primary control-plane node in "addons-313496" cluster
	I1014 13:38:51.449451   15646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:38:51.449473   15646 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:38:51.449481   15646 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:51.449543   15646 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:38:51.449553   15646 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:38:51.449817   15646 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/config.json ...
	I1014 13:38:51.449834   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/config.json: {Name:mkf74f0baed126ca6fcf2a2185289a294d298977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:38:51.449946   15646 start.go:360] acquireMachinesLock for addons-313496: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:38:51.449989   15646 start.go:364] duration metric: took 30.931µs to acquireMachinesLock for "addons-313496"
	I1014 13:38:51.450004   15646 start.go:93] Provisioning new machine with config: &{Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:38:51.450050   15646 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 13:38:51.452262   15646 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1014 13:38:51.452363   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:38:51.452392   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:38:51.465595   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1014 13:38:51.466026   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:38:51.466579   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:38:51.466610   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:38:51.466969   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:38:51.467183   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:38:51.467347   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:38:51.467473   15646 start.go:159] libmachine.API.Create for "addons-313496" (driver="kvm2")
	I1014 13:38:51.467505   15646 client.go:168] LocalClient.Create starting
	I1014 13:38:51.467549   15646 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:38:51.739836   15646 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:38:51.939968   15646 main.go:141] libmachine: Running pre-create checks...
	I1014 13:38:51.939994   15646 main.go:141] libmachine: (addons-313496) Calling .PreCreateCheck
	I1014 13:38:51.940434   15646 main.go:141] libmachine: (addons-313496) Calling .GetConfigRaw
	I1014 13:38:51.940814   15646 main.go:141] libmachine: Creating machine...
	I1014 13:38:51.940828   15646 main.go:141] libmachine: (addons-313496) Calling .Create
	I1014 13:38:51.940964   15646 main.go:141] libmachine: (addons-313496) Creating KVM machine...
	I1014 13:38:51.942221   15646 main.go:141] libmachine: (addons-313496) DBG | found existing default KVM network
	I1014 13:38:51.942986   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:51.942841   15668 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1014 13:38:51.943016   15646 main.go:141] libmachine: (addons-313496) DBG | created network xml: 
	I1014 13:38:51.943030   15646 main.go:141] libmachine: (addons-313496) DBG | <network>
	I1014 13:38:51.943041   15646 main.go:141] libmachine: (addons-313496) DBG |   <name>mk-addons-313496</name>
	I1014 13:38:51.943054   15646 main.go:141] libmachine: (addons-313496) DBG |   <dns enable='no'/>
	I1014 13:38:51.943064   15646 main.go:141] libmachine: (addons-313496) DBG |   
	I1014 13:38:51.943077   15646 main.go:141] libmachine: (addons-313496) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 13:38:51.943091   15646 main.go:141] libmachine: (addons-313496) DBG |     <dhcp>
	I1014 13:38:51.943104   15646 main.go:141] libmachine: (addons-313496) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 13:38:51.943115   15646 main.go:141] libmachine: (addons-313496) DBG |     </dhcp>
	I1014 13:38:51.943126   15646 main.go:141] libmachine: (addons-313496) DBG |   </ip>
	I1014 13:38:51.943135   15646 main.go:141] libmachine: (addons-313496) DBG |   
	I1014 13:38:51.943145   15646 main.go:141] libmachine: (addons-313496) DBG | </network>
	I1014 13:38:51.943154   15646 main.go:141] libmachine: (addons-313496) DBG | 
	I1014 13:38:51.948712   15646 main.go:141] libmachine: (addons-313496) DBG | trying to create private KVM network mk-addons-313496 192.168.39.0/24...
	I1014 13:38:52.014654   15646 main.go:141] libmachine: (addons-313496) DBG | private KVM network mk-addons-313496 192.168.39.0/24 created
	I1014 13:38:52.014683   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.014586   15668 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:52.014713   15646 main.go:141] libmachine: (addons-313496) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496 ...
	I1014 13:38:52.014732   15646 main.go:141] libmachine: (addons-313496) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:38:52.014753   15646 main.go:141] libmachine: (addons-313496) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:38:52.283526   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.283416   15668 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa...
	I1014 13:38:52.335983   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.335860   15668 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/addons-313496.rawdisk...
	I1014 13:38:52.336011   15646 main.go:141] libmachine: (addons-313496) DBG | Writing magic tar header
	I1014 13:38:52.336025   15646 main.go:141] libmachine: (addons-313496) DBG | Writing SSH key tar header
	I1014 13:38:52.336038   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.336009   15668 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496 ...
	I1014 13:38:52.336165   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496
	I1014 13:38:52.336200   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:38:52.336237   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496 (perms=drwx------)
	I1014 13:38:52.336263   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:38:52.336279   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:52.336297   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:38:52.336305   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:38:52.336312   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:38:52.336319   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home
	I1014 13:38:52.336327   15646 main.go:141] libmachine: (addons-313496) DBG | Skipping /home - not owner
	I1014 13:38:52.336344   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:38:52.336365   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:38:52.336376   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:38:52.336384   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:38:52.336396   15646 main.go:141] libmachine: (addons-313496) Creating domain...
	I1014 13:38:52.337426   15646 main.go:141] libmachine: (addons-313496) define libvirt domain using xml: 
	I1014 13:38:52.337465   15646 main.go:141] libmachine: (addons-313496) <domain type='kvm'>
	I1014 13:38:52.337475   15646 main.go:141] libmachine: (addons-313496)   <name>addons-313496</name>
	I1014 13:38:52.337488   15646 main.go:141] libmachine: (addons-313496)   <memory unit='MiB'>4000</memory>
	I1014 13:38:52.337494   15646 main.go:141] libmachine: (addons-313496)   <vcpu>2</vcpu>
	I1014 13:38:52.337505   15646 main.go:141] libmachine: (addons-313496)   <features>
	I1014 13:38:52.337518   15646 main.go:141] libmachine: (addons-313496)     <acpi/>
	I1014 13:38:52.337526   15646 main.go:141] libmachine: (addons-313496)     <apic/>
	I1014 13:38:52.337532   15646 main.go:141] libmachine: (addons-313496)     <pae/>
	I1014 13:38:52.337539   15646 main.go:141] libmachine: (addons-313496)     
	I1014 13:38:52.337545   15646 main.go:141] libmachine: (addons-313496)   </features>
	I1014 13:38:52.337553   15646 main.go:141] libmachine: (addons-313496)   <cpu mode='host-passthrough'>
	I1014 13:38:52.337559   15646 main.go:141] libmachine: (addons-313496)   
	I1014 13:38:52.337568   15646 main.go:141] libmachine: (addons-313496)   </cpu>
	I1014 13:38:52.337574   15646 main.go:141] libmachine: (addons-313496)   <os>
	I1014 13:38:52.337587   15646 main.go:141] libmachine: (addons-313496)     <type>hvm</type>
	I1014 13:38:52.337613   15646 main.go:141] libmachine: (addons-313496)     <boot dev='cdrom'/>
	I1014 13:38:52.337634   15646 main.go:141] libmachine: (addons-313496)     <boot dev='hd'/>
	I1014 13:38:52.337653   15646 main.go:141] libmachine: (addons-313496)     <bootmenu enable='no'/>
	I1014 13:38:52.337661   15646 main.go:141] libmachine: (addons-313496)   </os>
	I1014 13:38:52.337670   15646 main.go:141] libmachine: (addons-313496)   <devices>
	I1014 13:38:52.337682   15646 main.go:141] libmachine: (addons-313496)     <disk type='file' device='cdrom'>
	I1014 13:38:52.337699   15646 main.go:141] libmachine: (addons-313496)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/boot2docker.iso'/>
	I1014 13:38:52.337711   15646 main.go:141] libmachine: (addons-313496)       <target dev='hdc' bus='scsi'/>
	I1014 13:38:52.337721   15646 main.go:141] libmachine: (addons-313496)       <readonly/>
	I1014 13:38:52.337731   15646 main.go:141] libmachine: (addons-313496)     </disk>
	I1014 13:38:52.337741   15646 main.go:141] libmachine: (addons-313496)     <disk type='file' device='disk'>
	I1014 13:38:52.337753   15646 main.go:141] libmachine: (addons-313496)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:38:52.337769   15646 main.go:141] libmachine: (addons-313496)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/addons-313496.rawdisk'/>
	I1014 13:38:52.337784   15646 main.go:141] libmachine: (addons-313496)       <target dev='hda' bus='virtio'/>
	I1014 13:38:52.337792   15646 main.go:141] libmachine: (addons-313496)     </disk>
	I1014 13:38:52.337805   15646 main.go:141] libmachine: (addons-313496)     <interface type='network'>
	I1014 13:38:52.337816   15646 main.go:141] libmachine: (addons-313496)       <source network='mk-addons-313496'/>
	I1014 13:38:52.337842   15646 main.go:141] libmachine: (addons-313496)       <model type='virtio'/>
	I1014 13:38:52.337861   15646 main.go:141] libmachine: (addons-313496)     </interface>
	I1014 13:38:52.337869   15646 main.go:141] libmachine: (addons-313496)     <interface type='network'>
	I1014 13:38:52.337874   15646 main.go:141] libmachine: (addons-313496)       <source network='default'/>
	I1014 13:38:52.337881   15646 main.go:141] libmachine: (addons-313496)       <model type='virtio'/>
	I1014 13:38:52.337884   15646 main.go:141] libmachine: (addons-313496)     </interface>
	I1014 13:38:52.337893   15646 main.go:141] libmachine: (addons-313496)     <serial type='pty'>
	I1014 13:38:52.337907   15646 main.go:141] libmachine: (addons-313496)       <target port='0'/>
	I1014 13:38:52.337919   15646 main.go:141] libmachine: (addons-313496)     </serial>
	I1014 13:38:52.337928   15646 main.go:141] libmachine: (addons-313496)     <console type='pty'>
	I1014 13:38:52.337954   15646 main.go:141] libmachine: (addons-313496)       <target type='serial' port='0'/>
	I1014 13:38:52.337963   15646 main.go:141] libmachine: (addons-313496)     </console>
	I1014 13:38:52.337969   15646 main.go:141] libmachine: (addons-313496)     <rng model='virtio'>
	I1014 13:38:52.337984   15646 main.go:141] libmachine: (addons-313496)       <backend model='random'>/dev/random</backend>
	I1014 13:38:52.338007   15646 main.go:141] libmachine: (addons-313496)     </rng>
	I1014 13:38:52.338019   15646 main.go:141] libmachine: (addons-313496)     
	I1014 13:38:52.338029   15646 main.go:141] libmachine: (addons-313496)     
	I1014 13:38:52.338047   15646 main.go:141] libmachine: (addons-313496)   </devices>
	I1014 13:38:52.338055   15646 main.go:141] libmachine: (addons-313496) </domain>
	I1014 13:38:52.338079   15646 main.go:141] libmachine: (addons-313496) 
	I1014 13:38:52.343729   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:64:3a:5d in network default
	I1014 13:38:52.344237   15646 main.go:141] libmachine: (addons-313496) Ensuring networks are active...
	I1014 13:38:52.344257   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:52.344885   15646 main.go:141] libmachine: (addons-313496) Ensuring network default is active
	I1014 13:38:52.345151   15646 main.go:141] libmachine: (addons-313496) Ensuring network mk-addons-313496 is active
	I1014 13:38:52.345656   15646 main.go:141] libmachine: (addons-313496) Getting domain xml...
	I1014 13:38:52.346296   15646 main.go:141] libmachine: (addons-313496) Creating domain...
	I1014 13:38:53.745757   15646 main.go:141] libmachine: (addons-313496) Waiting to get IP...
	I1014 13:38:53.746553   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:53.746898   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:53.746916   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:53.746878   15668 retry.go:31] will retry after 215.074025ms: waiting for machine to come up
	I1014 13:38:53.963342   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:53.963813   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:53.963845   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:53.963781   15668 retry.go:31] will retry after 295.378447ms: waiting for machine to come up
	I1014 13:38:54.260376   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:54.260812   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:54.260845   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:54.260786   15668 retry.go:31] will retry after 389.386084ms: waiting for machine to come up
	I1014 13:38:54.651296   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:54.651725   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:54.651751   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:54.651668   15668 retry.go:31] will retry after 440.219356ms: waiting for machine to come up
	I1014 13:38:55.093378   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:55.093712   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:55.093748   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:55.093685   15668 retry.go:31] will retry after 607.945898ms: waiting for machine to come up
	I1014 13:38:55.703764   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:55.704295   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:55.704323   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:55.704247   15668 retry.go:31] will retry after 629.470004ms: waiting for machine to come up
	I1014 13:38:56.335240   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:56.335665   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:56.335689   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:56.335607   15668 retry.go:31] will retry after 1.050110581s: waiting for machine to come up
	I1014 13:38:57.387517   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:57.387918   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:57.387939   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:57.387892   15668 retry.go:31] will retry after 1.397516625s: waiting for machine to come up
	I1014 13:38:58.787515   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:58.787928   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:58.787957   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:58.787880   15668 retry.go:31] will retry after 1.564506642s: waiting for machine to come up
	I1014 13:39:00.353577   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:00.354008   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:00.354023   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:00.353983   15668 retry.go:31] will retry after 1.737801278s: waiting for machine to come up
	I1014 13:39:02.093401   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:02.093986   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:02.094016   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:02.093940   15668 retry.go:31] will retry after 2.025246342s: waiting for machine to come up
	I1014 13:39:04.122150   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:04.122572   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:04.122592   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:04.122532   15668 retry.go:31] will retry after 3.273652956s: waiting for machine to come up
	I1014 13:39:07.398000   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:07.398455   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:07.398475   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:07.398419   15668 retry.go:31] will retry after 4.219441467s: waiting for machine to come up
	I1014 13:39:11.619652   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:11.620095   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:11.620123   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:11.620067   15668 retry.go:31] will retry after 4.721306555s: waiting for machine to come up
	I1014 13:39:16.342673   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:16.343018   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has current primary IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:16.343048   15646 main.go:141] libmachine: (addons-313496) Found IP for machine: 192.168.39.177
	I1014 13:39:16.343066   15646 main.go:141] libmachine: (addons-313496) Reserving static IP address...
	I1014 13:39:16.343413   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find host DHCP lease matching {name: "addons-313496", mac: "52:54:00:12:ec:ab", ip: "192.168.39.177"} in network mk-addons-313496
	I1014 13:39:16.411345   15646 main.go:141] libmachine: (addons-313496) DBG | Getting to WaitForSSH function...
	I1014 13:39:16.411384   15646 main.go:141] libmachine: (addons-313496) Reserved static IP address: 192.168.39.177
	I1014 13:39:16.411396   15646 main.go:141] libmachine: (addons-313496) Waiting for SSH to be available...
	I1014 13:39:16.413855   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:16.414127   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496
	I1014 13:39:16.414153   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find defined IP address of network mk-addons-313496 interface with MAC address 52:54:00:12:ec:ab
	I1014 13:39:16.414317   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH client type: external
	I1014 13:39:16.414339   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa (-rw-------)
	I1014 13:39:16.414400   15646 main.go:141] libmachine: (addons-313496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:39:16.414432   15646 main.go:141] libmachine: (addons-313496) DBG | About to run SSH command:
	I1014 13:39:16.414462   15646 main.go:141] libmachine: (addons-313496) DBG | exit 0
	I1014 13:39:16.426016   15646 main.go:141] libmachine: (addons-313496) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:39:16.426031   15646 main.go:141] libmachine: (addons-313496) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:39:16.426037   15646 main.go:141] libmachine: (addons-313496) DBG | command : exit 0
	I1014 13:39:16.426048   15646 main.go:141] libmachine: (addons-313496) DBG | err     : exit status 255
	I1014 13:39:16.426058   15646 main.go:141] libmachine: (addons-313496) DBG | output  : 
	I1014 13:39:19.428604   15646 main.go:141] libmachine: (addons-313496) DBG | Getting to WaitForSSH function...
	I1014 13:39:19.430754   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.431133   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.431167   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.431269   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH client type: external
	I1014 13:39:19.431296   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa (-rw-------)
	I1014 13:39:19.431329   15646 main.go:141] libmachine: (addons-313496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:39:19.431344   15646 main.go:141] libmachine: (addons-313496) DBG | About to run SSH command:
	I1014 13:39:19.431356   15646 main.go:141] libmachine: (addons-313496) DBG | exit 0
	I1014 13:39:19.558866   15646 main.go:141] libmachine: (addons-313496) DBG | SSH cmd err, output: <nil>: 
	I1014 13:39:19.559125   15646 main.go:141] libmachine: (addons-313496) KVM machine creation complete!
	I1014 13:39:19.559429   15646 main.go:141] libmachine: (addons-313496) Calling .GetConfigRaw
	I1014 13:39:19.559949   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:19.560110   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:19.560283   15646 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:39:19.560295   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:19.561460   15646 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:39:19.561473   15646 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:39:19.561478   15646 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:39:19.561483   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.563511   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.563806   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.563830   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.563948   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.564113   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.564250   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.564354   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.564516   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.564703   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.564716   15646 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:39:19.670020   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
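
The probe above is just the external `ssh` binary being invoked with a throwaway known-hosts file until `exit 0` succeeds (an earlier attempt fails with status 255 while the guest is still booting). Below is a minimal Go sketch of that retry loop, shelling out the same way; the user, host, key path and retry interval are illustrative stand-ins, not values read from minikube's code.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH polls the guest by running `exit 0` through the system ssh
// client, mirroring the "external" SSH client type in the log above.
func waitForSSH(user, host, keyPath string, attempts int, delay time.Duration) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		if err := exec.Command("ssh", args...).Run(); err == nil {
			return nil // the guest answered, SSH is ready
		} else {
			lastErr = err // e.g. exit status 255 while the guest is still booting
		}
		time.Sleep(delay) // roughly the ~3s gap visible between probes above
	}
	return fmt.Errorf("ssh never became ready: %w", lastErr)
}

func main() {
	// Illustrative values; the real user, IP and key path come from the machine config.
	if err := waitForSSH("docker", "192.168.39.177", "/path/to/id_rsa", 10, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
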
	I1014 13:39:19.670042   15646 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:39:19.670049   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.672866   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.673182   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.673204   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.673386   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.673579   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.673765   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.673918   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.674079   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.674229   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.674239   15646 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:39:19.783097   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:39:19.783161   15646 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:39:19.783173   15646 main.go:141] libmachine: Provisioning with buildroot...
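
Provisioner detection is only the `cat /etc/os-release` output above matched against known distributions. A rough sketch of that matching, assuming the standard os-release `KEY=value` format; the helper name and the distro list here are made up for illustration.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner picks a provisioner name from /etc/os-release content,
// e.g. the Buildroot output shown in the log above.
func detectProvisioner(osRelease string) (string, error) {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		fields[kv[0]] = strings.Trim(kv[1], `"`)
	}
	switch strings.ToLower(fields["ID"]) {
	case "buildroot":
		return "buildroot", nil
	case "ubuntu", "debian":
		return fields["ID"], nil
	}
	return "", fmt.Errorf("no compatible provisioner for ID=%q", fields["ID"])
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	p, err := detectProvisioner(out)
	fmt.Println(p, err) // buildroot <nil>
}
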
	I1014 13:39:19.783183   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:39:19.783443   15646 buildroot.go:166] provisioning hostname "addons-313496"
	I1014 13:39:19.783468   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:39:19.783648   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.786172   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.786540   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.786561   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.786727   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.786891   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.787059   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.787203   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.787352   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.787501   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.787512   15646 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-313496 && echo "addons-313496" | sudo tee /etc/hostname
	I1014 13:39:19.910742   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-313496
	
	I1014 13:39:19.910773   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.913209   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.913514   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.913541   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.913674   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.913846   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.913982   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.914156   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.914305   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.914460   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.914475   15646 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-313496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-313496/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-313496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:39:20.031821   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:39:20.031848   15646 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:39:20.031884   15646 buildroot.go:174] setting up certificates
	I1014 13:39:20.031894   15646 provision.go:84] configureAuth start
	I1014 13:39:20.031904   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:39:20.032129   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:20.034872   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.035250   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.035277   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.035388   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.037455   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.037752   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.037786   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.037931   15646 provision.go:143] copyHostCerts
	I1014 13:39:20.037997   15646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:39:20.038152   15646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:39:20.038211   15646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:39:20.038257   15646 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.addons-313496 san=[127.0.0.1 192.168.39.177 addons-313496 localhost minikube]
	I1014 13:39:20.196030   15646 provision.go:177] copyRemoteCerts
	I1014 13:39:20.196081   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:39:20.196107   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.198559   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.198800   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.198825   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.199004   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.199166   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.199316   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.199420   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.285289   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:39:20.313491   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:39:20.340877   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:39:20.368021   15646 provision.go:87] duration metric: took 336.112374ms to configureAuth
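
configureAuth copies the host-side CA material and issues a server certificate whose SANs are the list logged above (127.0.0.1, the guest IP, the hostname, localhost, minikube). The sketch below shows the general mechanism with Go's crypto/x509; key size, validity window and subject names are assumptions, and this is not minikube's own helper.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a server certificate signed by the given CA, with the
// DNS names and IPs placed in the SAN extension -- the same shape as the
// san=[...] entry in the log above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"example.addons-313496"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil // DER bytes; PEM-encode before writing a server.pem
}

func main() {
	// Throwaway self-signed CA, only to exercise newServerCert.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "exampleCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	der, _, err := newServerCert(caCert, caKey,
		[]string{"addons-313496", "localhost", "minikube"},
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.177")})
	fmt.Println(len(der), err)
}
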
	I1014 13:39:20.368047   15646 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:39:20.368244   15646 config.go:182] Loaded profile config "addons-313496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:20.368324   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.370802   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.371140   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.371168   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.371306   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.371479   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.371637   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.371752   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.371896   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:20.372061   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:20.372074   15646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:39:20.598613   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:39:20.598645   15646 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:39:20.598671   15646 main.go:141] libmachine: (addons-313496) Calling .GetURL
	I1014 13:39:20.599851   15646 main.go:141] libmachine: (addons-313496) DBG | Using libvirt version 6000000
	I1014 13:39:20.601952   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.602271   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.602301   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.602460   15646 main.go:141] libmachine: Docker is up and running!
	I1014 13:39:20.602478   15646 main.go:141] libmachine: Reticulating splines...
	I1014 13:39:20.602486   15646 client.go:171] duration metric: took 29.134969553s to LocalClient.Create
	I1014 13:39:20.602509   15646 start.go:167] duration metric: took 29.135036656s to libmachine.API.Create "addons-313496"
	I1014 13:39:20.602519   15646 start.go:293] postStartSetup for "addons-313496" (driver="kvm2")
	I1014 13:39:20.602528   15646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:39:20.602544   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.602776   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:39:20.602800   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.604658   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.604966   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.604991   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.605087   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.605265   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.605445   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.605553   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.688481   15646 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:39:20.692697   15646 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:39:20.692727   15646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:39:20.692803   15646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:39:20.692830   15646 start.go:296] duration metric: took 90.306444ms for postStartSetup
	I1014 13:39:20.692862   15646 main.go:141] libmachine: (addons-313496) Calling .GetConfigRaw
	I1014 13:39:20.693442   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:20.695881   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.696139   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.696168   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.696443   15646 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/config.json ...
	I1014 13:39:20.696616   15646 start.go:128] duration metric: took 29.246557136s to createHost
	I1014 13:39:20.696638   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.698700   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.698996   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.699026   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.699192   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.699369   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.699489   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.699613   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.699745   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:20.699898   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:20.699907   15646 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:39:20.807573   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728913160.787783114
	
	I1014 13:39:20.807603   15646 fix.go:216] guest clock: 1728913160.787783114
	I1014 13:39:20.807614   15646 fix.go:229] Guest: 2024-10-14 13:39:20.787783114 +0000 UTC Remote: 2024-10-14 13:39:20.696625309 +0000 UTC m=+29.345353748 (delta=91.157805ms)
	I1014 13:39:20.807672   15646 fix.go:200] guest clock delta is within tolerance: 91.157805ms
	I1014 13:39:20.807682   15646 start.go:83] releasing machines lock for "addons-313496", held for 29.35768389s
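
The clock check runs `date +%s.%N` on the guest and compares it with the host's wall clock; here the ~91ms delta is inside tolerance, so no resync is needed. A small sketch of that comparison, assuming the fractional part is a full nine-digit nanosecond field as in the output above; the one-second tolerance below is an illustrative value.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the given host reference time.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Assumes a full nine-digit nanosecond field, as in the log output.
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	delta, err := clockDelta("1728913160.787783114", time.Now())
	if err != nil {
		fmt.Println(err)
		return
	}
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // illustrative; pick whatever drift is acceptable
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
}
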
	I1014 13:39:20.807709   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.807972   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:20.811323   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.811742   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.811773   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.811978   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.812384   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.812516   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.812579   15646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:39:20.812633   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.812683   15646 ssh_runner.go:195] Run: cat /version.json
	I1014 13:39:20.812702   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.815092   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815186   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815467   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.815491   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815553   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.815590   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.815590   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815771   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.815781   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.815923   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.815933   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.816070   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.816151   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.816169   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.919625   15646 ssh_runner.go:195] Run: systemctl --version
	I1014 13:39:20.926280   15646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:39:21.088801   15646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:39:21.095670   15646 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:39:21.095743   15646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:39:21.111973   15646 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:39:21.112006   15646 start.go:495] detecting cgroup driver to use...
	I1014 13:39:21.112069   15646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:39:21.127345   15646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:39:21.140741   15646 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:39:21.140791   15646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:39:21.153561   15646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:39:21.167046   15646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:39:21.276406   15646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:39:21.441005   15646 docker.go:233] disabling docker service ...
	I1014 13:39:21.441084   15646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:39:21.455334   15646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:39:21.468467   15646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:39:21.578055   15646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:39:21.692980   15646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:39:21.707977   15646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:39:21.726866   15646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:39:21.726927   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.737978   15646 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:39:21.738047   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.748930   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.759522   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.770335   15646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:39:21.781479   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.792499   15646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.810247   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.820885   15646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:39:21.830938   15646 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:39:21.830989   15646 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:39:21.843876   15646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:39:21.853716   15646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:21.972678   15646 ssh_runner.go:195] Run: sudo systemctl restart crio
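
The block above configures CRI-O through in-place `sed` edits (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged port sysctl) and then handles bridge netfilter: probing the sysctl fails because br_netfilter is not loaded yet, which is treated as acceptable, so the module is loaded and IPv4 forwarding is switched on before the daemon reload and CRI-O restart. Below is a sketch of just that netfilter fallback, run locally instead of over SSH; the helper names are hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

// run is a tiny local stand-in for the ssh_runner calls in the log.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %w\n%s", name, args, err, out)
	}
	return nil
}

// enableBridgeNetfilter mirrors the fallback above: probing the sysctl may
// fail when br_netfilter is not loaded yet, which is fine -- load the module,
// then make sure IPv4 forwarding is on.
func enableBridgeNetfilter() error {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Expected on a fresh guest; the module simply is not loaded yet.
		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
			return err
		}
	}
	return run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() {
	if err := enableBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
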
	I1014 13:39:22.067345   15646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:39:22.067431   15646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:39:22.072339   15646 start.go:563] Will wait 60s for crictl version
	I1014 13:39:22.072531   15646 ssh_runner.go:195] Run: which crictl
	I1014 13:39:22.076529   15646 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:39:22.115507   15646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:39:22.115632   15646 ssh_runner.go:195] Run: crio --version
	I1014 13:39:22.144532   15646 ssh_runner.go:195] Run: crio --version
	I1014 13:39:22.173534   15646 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:39:22.174835   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:22.177082   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:22.177408   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:22.177427   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:22.177621   15646 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:39:22.181621   15646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:22.193930   15646 kubeadm.go:883] updating cluster {Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:39:22.194056   15646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:22.194109   15646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:22.224947   15646 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 13:39:22.225026   15646 ssh_runner.go:195] Run: which lz4
	I1014 13:39:22.229066   15646 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 13:39:22.233200   15646 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 13:39:22.233221   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 13:39:23.512534   15646 crio.go:462] duration metric: took 1.28349036s to copy over tarball
	I1014 13:39:23.512611   15646 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 13:39:25.711270   15646 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.198623226s)
	I1014 13:39:25.711303   15646 crio.go:469] duration metric: took 2.198741311s to extract the tarball
	I1014 13:39:25.711310   15646 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 13:39:25.747940   15646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:25.791900   15646 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:39:25.791923   15646 cache_images.go:84] Images are preloaded, skipping loading
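
The preload flow above is: stat the tarball on the guest, copy it over when missing, unpack it into /var with lz4 (keeping xattrs so file capabilities survive), delete it, and re-run `crictl images` to confirm the images are now present. A sketch of the unpack-and-clean-up portion, with the copy step left out because it depends on the SSH transport; the paths match the log, everything else is illustrative.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks the preloaded image tarball into destDir with lz4,
// preserving xattrs, then removes the tarball -- the same tar invocation as in
// the log. It assumes the tarball has already been copied onto the machine.
func extractPreload(tarball, destDir string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball not present, copy it first: %w", err)
	}
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", destDir, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract failed: %w\n%s", err, out)
	}
	return exec.Command("sudo", "rm", "-f", tarball).Run()
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Println(err)
	}
}
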
	I1014 13:39:25.791941   15646 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.31.1 crio true true} ...
	I1014 13:39:25.792024   15646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-313496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
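
The kubelet unit override above is a rendered template: the ExecStart line is rebuilt from the Kubernetes version, node name and node IP shown in the config that follows it. A trimmed-down sketch of that rendering with text/template; the template text keeps only the flags visible in the log and is not the real minikube template.

package main

import (
	"os"
	"text/template"
)

// kubeletDropIn keeps only the flags visible in the log; the real drop-in is
// generated from a larger template inside minikube.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.1",
		"NodeName":          "addons-313496",
		"NodeIP":            "192.168.39.177",
	})
}
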
	I1014 13:39:25.792083   15646 ssh_runner.go:195] Run: crio config
	I1014 13:39:25.844006   15646 cni.go:84] Creating CNI manager for ""
	I1014 13:39:25.844029   15646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:39:25.844039   15646 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:39:25.844060   15646 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-313496 NodeName:addons-313496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:39:25.844222   15646 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-313496"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.177"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 13:39:25.844290   15646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:39:25.854212   15646 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:39:25.854278   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 13:39:25.863717   15646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1014 13:39:25.879968   15646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:39:25.899824   15646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 13:39:25.917120   15646 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I1014 13:39:25.921090   15646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
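
Both host entries (`host.minikube.internal` earlier and `control-plane.minikube.internal` here) are added with the same idempotent pattern: filter out any stale line for the name, then append the fresh IP mapping. A plain-Go equivalent of that one-liner; writing /etc/hosts needs root, and the temp-file-plus-`sudo cp` step from the log is simplified to a direct write.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line that ends in "<TAB>name", then
// appends the desired "IP<TAB>name" mapping, matching the grep -v / echo
// pipeline in the log above.
func ensureHostsEntry(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.177", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
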
	I1014 13:39:25.934049   15646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:26.052990   15646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:26.069049   15646 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496 for IP: 192.168.39.177
	I1014 13:39:26.069079   15646 certs.go:194] generating shared ca certs ...
	I1014 13:39:26.069100   15646 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.069269   15646 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:39:26.255409   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt ...
	I1014 13:39:26.255436   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt: {Name:mk6d2468f99b8c4287fe2a238d837c16037ad4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.255590   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key ...
	I1014 13:39:26.255603   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key: {Name:mkdcb4871014a40ba9ec5ec69c1557d9dcc077f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.255676   15646 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:39:26.556583   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt ...
	I1014 13:39:26.556616   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt: {Name:mk85c6001f322affd46dcd9480619fd86038d31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.556794   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key ...
	I1014 13:39:26.556806   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key: {Name:mk90c3216b24609d702953b1a1eea2d38998c342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.556876   15646 certs.go:256] generating profile certs ...
	I1014 13:39:26.556926   15646 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.key
	I1014 13:39:26.556941   15646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt with IP's: []
	I1014 13:39:26.768836   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt ...
	I1014 13:39:26.768872   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: {Name:mkc64667b6f7d9ba3450cf77fbbbf751d5546cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.769070   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.key ...
	I1014 13:39:26.769083   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.key: {Name:mk18645965524d2c8fb3313f2197b04a4cf88847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.769162   15646 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5
	I1014 13:39:26.769183   15646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.177]
	I1014 13:39:26.893389   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5 ...
	I1014 13:39:26.893417   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5: {Name:mkf95be3ae2a42f2d8a69336c3a3c6ee5d6607f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.893570   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5 ...
	I1014 13:39:26.893582   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5: {Name:mk6184049e18fea6750120810a3ca5a8f6fd8446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.893652   15646 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt
	I1014 13:39:26.893738   15646 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key
	I1014 13:39:26.893790   15646 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key
	I1014 13:39:26.893803   15646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt with IP's: []
	I1014 13:39:27.089290   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt ...
	I1014 13:39:27.089323   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt: {Name:mk42855ca2b5da79e664e15abbca8e866afd2d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:27.089487   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key ...
	I1014 13:39:27.089500   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key: {Name:mk34221d212b4f85855df1891610069be5307a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:27.089680   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:39:27.089713   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:39:27.089737   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:39:27.089761   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:39:27.090299   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:39:27.118965   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:39:27.144956   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:39:27.171082   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:39:27.195761   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 13:39:27.220642   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:39:27.246982   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:39:27.273012   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 13:39:27.299157   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:39:27.325164   15646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:39:27.342963   15646 ssh_runner.go:195] Run: openssl version
	I1014 13:39:27.348912   15646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:39:27.360496   15646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:27.365200   15646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:27.365246   15646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:27.371664   15646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
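
The commands above install the minikube CA into the system trust store: the PEM in /usr/share/ca-certificates is linked into /etc/ssl/certs, and a `<subject-hash>.0` symlink (b5213941.0 here) is added so OpenSSL-style lookups can find it. A sketch of computing that hash and creating the link; it shells out to `openssl x509 -hash -noout` exactly as the log does, paths are illustrative, and creating links under /etc/ssl/certs requires root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and creates
// the <hash>.0 symlink that TLS libraries look up in the certs directory.
func linkCACert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
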
	I1014 13:39:27.383289   15646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:39:27.387742   15646 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:39:27.387786   15646 kubeadm.go:392] StartCluster: {Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 C
lusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:27.387853   15646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:39:27.387894   15646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:39:27.425497   15646 cri.go:89] found id: ""
	I1014 13:39:27.425557   15646 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:39:27.438159   15646 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:39:27.453464   15646 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:39:27.464089   15646 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:39:27.464108   15646 kubeadm.go:157] found existing configuration files:
	
	I1014 13:39:27.464150   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:39:27.475257   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:39:27.475320   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:39:27.491753   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:39:27.500825   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:39:27.500889   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:39:27.510158   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:39:27.518924   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:39:27.518968   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:39:27.527842   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:39:27.536439   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:39:27.536492   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 13:39:27.545597   15646 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 13:39:27.601944   15646 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:39:27.602063   15646 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:39:27.701322   15646 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:39:27.701462   15646 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:39:27.701613   15646 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:39:27.713345   15646 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:39:27.827892   15646 out.go:235]   - Generating certificates and keys ...
	I1014 13:39:27.828001   15646 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:39:27.828073   15646 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:39:27.946021   15646 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:39:28.060767   15646 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:39:28.289701   15646 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:39:28.548524   15646 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:39:28.697329   15646 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:39:28.697488   15646 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-313496 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I1014 13:39:28.765131   15646 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:39:28.765304   15646 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-313496 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I1014 13:39:29.101863   15646 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:39:29.551101   15646 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:39:29.663371   15646 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:39:29.663674   15646 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:39:29.865105   15646 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:39:29.952155   15646 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:39:30.044018   15646 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:39:30.256677   15646 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:39:30.338557   15646 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:39:30.339041   15646 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:39:30.341405   15646 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:39:30.343273   15646 out.go:235]   - Booting up control plane ...
	I1014 13:39:30.343393   15646 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:39:30.343512   15646 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:39:30.343621   15646 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:39:30.358628   15646 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:39:30.364469   15646 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:39:30.364536   15646 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:39:30.493913   15646 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:39:30.494048   15646 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:39:30.993569   15646 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.204534ms
	I1014 13:39:30.993706   15646 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:39:36.492451   15646 kubeadm.go:310] [api-check] The API server is healthy after 5.501411992s
	I1014 13:39:36.505466   15646 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:39:36.518910   15646 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:39:36.547182   15646 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:39:36.547439   15646 kubeadm.go:310] [mark-control-plane] Marking the node addons-313496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:39:36.560761   15646 kubeadm.go:310] [bootstrap-token] Using token: eva5uq.q6cssgtl8dwhgruv
	I1014 13:39:36.562053   15646 out.go:235]   - Configuring RBAC rules ...
	I1014 13:39:36.562186   15646 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:39:36.578636   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:39:36.587355   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:39:36.591902   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:39:36.599529   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:39:36.605539   15646 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:39:36.899618   15646 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:39:37.328157   15646 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:39:37.898413   15646 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:39:37.899386   15646 kubeadm.go:310] 
	I1014 13:39:37.899472   15646 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:39:37.899485   15646 kubeadm.go:310] 
	I1014 13:39:37.899630   15646 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:39:37.899651   15646 kubeadm.go:310] 
	I1014 13:39:37.899704   15646 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:39:37.899762   15646 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:39:37.899813   15646 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:39:37.899821   15646 kubeadm.go:310] 
	I1014 13:39:37.899865   15646 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:39:37.899877   15646 kubeadm.go:310] 
	I1014 13:39:37.899948   15646 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:39:37.899958   15646 kubeadm.go:310] 
	I1014 13:39:37.900034   15646 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:39:37.900123   15646 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:39:37.900192   15646 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:39:37.900200   15646 kubeadm.go:310] 
	I1014 13:39:37.900273   15646 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:39:37.900345   15646 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:39:37.900351   15646 kubeadm.go:310] 
	I1014 13:39:37.900429   15646 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eva5uq.q6cssgtl8dwhgruv \
	I1014 13:39:37.900558   15646 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 13:39:37.900582   15646 kubeadm.go:310] 	--control-plane 
	I1014 13:39:37.900597   15646 kubeadm.go:310] 
	I1014 13:39:37.900677   15646 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:39:37.900687   15646 kubeadm.go:310] 
	I1014 13:39:37.900764   15646 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eva5uq.q6cssgtl8dwhgruv \
	I1014 13:39:37.900855   15646 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 13:39:37.901743   15646 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:39:37.901832   15646 cni.go:84] Creating CNI manager for ""
	I1014 13:39:37.901849   15646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:39:37.903695   15646 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 13:39:37.905081   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 13:39:37.916217   15646 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 13:39:37.935796   15646 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:39:37.935872   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:37.935873   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-313496 minikube.k8s.io/updated_at=2024_10_14T13_39_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=addons-313496 minikube.k8s.io/primary=true
	I1014 13:39:38.093769   15646 ops.go:34] apiserver oom_adj: -16
	I1014 13:39:38.093895   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:38.594442   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:39.094058   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:39.594381   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:40.093992   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:40.593955   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:41.094408   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:41.594198   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:42.094364   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:42.202859   15646 kubeadm.go:1113] duration metric: took 4.267047648s to wait for elevateKubeSystemPrivileges
	I1014 13:39:42.202892   15646 kubeadm.go:394] duration metric: took 14.815109732s to StartCluster
	I1014 13:39:42.202908   15646 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:42.203041   15646 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:39:42.203403   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:42.203649   15646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:39:42.203676   15646 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:39:42.203723   15646 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1014 13:39:42.203835   15646 addons.go:69] Setting yakd=true in profile "addons-313496"
	I1014 13:39:42.203851   15646 config.go:182] Loaded profile config "addons-313496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:42.203859   15646 addons.go:234] Setting addon yakd=true in "addons-313496"
	I1014 13:39:42.203864   15646 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-313496"
	I1014 13:39:42.203869   15646 addons.go:69] Setting storage-provisioner=true in profile "addons-313496"
	I1014 13:39:42.203881   15646 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-313496"
	I1014 13:39:42.203849   15646 addons.go:69] Setting inspektor-gadget=true in profile "addons-313496"
	I1014 13:39:42.203885   15646 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-313496"
	I1014 13:39:42.203902   15646 addons.go:234] Setting addon inspektor-gadget=true in "addons-313496"
	I1014 13:39:42.203905   15646 addons.go:69] Setting registry=true in profile "addons-313496"
	I1014 13:39:42.203909   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203912   15646 addons.go:69] Setting volcano=true in profile "addons-313496"
	I1014 13:39:42.203915   15646 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-313496"
	I1014 13:39:42.203919   15646 addons.go:234] Setting addon registry=true in "addons-313496"
	I1014 13:39:42.203926   15646 addons.go:234] Setting addon volcano=true in "addons-313496"
	I1014 13:39:42.203942   15646 addons.go:69] Setting volumesnapshots=true in profile "addons-313496"
	I1014 13:39:42.203946   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203904   15646 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-313496"
	I1014 13:39:42.203952   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203956   15646 addons.go:234] Setting addon volumesnapshots=true in "addons-313496"
	I1014 13:39:42.203971   15646 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-313496"
	I1014 13:39:42.203976   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204309   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204323   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204350   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204390   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204406   15646 addons.go:69] Setting cloud-spanner=true in profile "addons-313496"
	I1014 13:39:42.204418   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204425   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204431   15646 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-313496"
	I1014 13:39:42.204450   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204459   15646 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-313496"
	I1014 13:39:42.204467   15646 addons.go:69] Setting gcp-auth=true in profile "addons-313496"
	I1014 13:39:42.204486   15646 addons.go:69] Setting ingress=true in profile "addons-313496"
	I1014 13:39:42.204498   15646 addons.go:69] Setting default-storageclass=true in profile "addons-313496"
	I1014 13:39:42.204510   15646 addons.go:69] Setting ingress-dns=true in profile "addons-313496"
	I1014 13:39:42.204515   15646 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-313496"
	I1014 13:39:42.204520   15646 addons.go:234] Setting addon ingress-dns=true in "addons-313496"
	I1014 13:39:42.204546   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204558   15646 addons.go:69] Setting metrics-server=true in profile "addons-313496"
	I1014 13:39:42.204583   15646 addons.go:234] Setting addon metrics-server=true in "addons-313496"
	I1014 13:39:42.204613   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204422   15646 addons.go:234] Setting addon cloud-spanner=true in "addons-313496"
	I1014 13:39:42.204671   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203948   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204857   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204882   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204903   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204929   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204381   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204952   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204489   15646 mustload.go:65] Loading cluster: addons-313496
	I1014 13:39:42.204973   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204490   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204993   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204501   15646 addons.go:234] Setting addon ingress=true in "addons-313496"
	I1014 13:39:42.203895   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203897   15646 addons.go:234] Setting addon storage-provisioner=true in "addons-313496"
	I1014 13:39:42.203930   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.205157   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205183   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204351   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.205311   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205334   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.205388   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205391   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.205410   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.205613   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205654   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.206750   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.206835   15646 out.go:177] * Verifying Kubernetes components...
	I1014 13:39:42.208771   15646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:42.225995   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I1014 13:39:42.226275   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I1014 13:39:42.226405   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I1014 13:39:42.226679   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.226767   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.226902   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1014 13:39:42.226975   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.227146   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.227156   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.227160   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.227169   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.227338   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.227524   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.227552   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.227757   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.227776   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.227777   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.228189   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.228235   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.228289   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.228311   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.228594   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.228874   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.229491   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.232902   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I1014 13:39:42.234981   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.235020   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.235079   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.235118   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.236142   15646 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-313496"
	I1014 13:39:42.236181   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.236430   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.236484   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.236957   15646 config.go:182] Loaded profile config "addons-313496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:42.237351   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.237389   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.237907   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.237950   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.264497   15646 addons.go:234] Setting addon default-storageclass=true in "addons-313496"
	I1014 13:39:42.264561   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.265068   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.267185   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.267228   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.267553   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
	I1014 13:39:42.267885   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I1014 13:39:42.267996   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
	I1014 13:39:42.268462   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.268544   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.270156   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.270266   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.270348   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.270530   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.270563   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.270742   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.270760   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.271031   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.271051   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.271204   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.271429   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.271839   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.271873   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.272182   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.272200   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.272933   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.272969   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.273495   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.273501   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1014 13:39:42.273568   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.274118   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.274221   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.274222   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.274315   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.274768   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.274793   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.275310   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.275961   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.276009   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.276199   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.278167   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1014 13:39:42.279251   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 13:39:42.279270   15646 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 13:39:42.279291   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.282850   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.283243   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.283268   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.283566   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.283739   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.283850   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.283957   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.286562   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39943
	I1014 13:39:42.286763   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I1014 13:39:42.286813   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I1014 13:39:42.286822   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I1014 13:39:42.287261   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.287271   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.287544   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.287788   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.287809   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.288529   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I1014 13:39:42.288781   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.288924   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.288938   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.288958   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.288988   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.289024   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.289671   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.289729   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.290245   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.290684   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I1014 13:39:42.291027   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.291064   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.291096   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.291115   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.291135   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.291372   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.291612   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.291806   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.292356   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.292389   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.292660   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.292876   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.292906   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.293626   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.293773   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I1014 13:39:42.294431   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.294468   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.294905   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.295455   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.295472   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.295651   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.295857   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.296026   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.296085   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.296209   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.298007   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.299160   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.299196   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1014 13:39:42.299161   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.299237   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.299949   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.299986   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.300807   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.300879   15646 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1014 13:39:42.301612   15646 out.go:177]   - Using image docker.io/busybox:stable
	I1014 13:39:42.306513   15646 out.go:177]   - Using image docker.io/registry:2.8.3
	I1014 13:39:42.306892   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.306914   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.307396   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.308050   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.308076   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.308925   15646 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 13:39:42.309138   15646 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 13:39:42.309153   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 13:39:42.309182   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.310703   15646 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:42.310722   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 13:39:42.310743   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.313269   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.313690   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.313714   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.314059   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.314267   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.314425   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.314577   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.315615   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.316306   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.316325   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.316503   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.316652   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.316782   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.316890   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.323002   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I1014 13:39:42.323563   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.324196   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.324214   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.324612   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.325187   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.325227   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.326442   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I1014 13:39:42.327489   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.327989   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.328004   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.328351   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.328497   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.330042   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.330633   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1014 13:39:42.331146   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.331582   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.331598   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.332122   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.332468   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 13:39:42.332696   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.332720   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.333062   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41445
	I1014 13:39:42.333224   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38645
	I1014 13:39:42.333805   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.334195   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.334425   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I1014 13:39:42.334798   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.334912   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.334940   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.334953   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.334956   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.335166   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I1014 13:39:42.335267   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.335283   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.335319   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.335495   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.335546   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.335675   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.335692   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 13:39:42.336411   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.336788   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.336802   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.336857   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I1014 13:39:42.337319   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.337710   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.337982   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.337999   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.338036   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.338355   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.338438   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.338481   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.338573   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.338866   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 13:39:42.339186   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.339224   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.341316   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 13:39:42.341740   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.341742   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.341934   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:42.341943   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:42.343271   15646 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1014 13:39:42.343316   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 13:39:42.343823   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:42.343862   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:42.343877   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:42.343890   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:42.343902   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:42.344615   15646 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:42.344632   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 13:39:42.344649   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.345580   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1014 13:39:42.345953   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:42.345966   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	W1014 13:39:42.346032   15646 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1014 13:39:42.346499   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 13:39:42.346945   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.348191   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I1014 13:39:42.348649   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.348851   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.348864   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.349151   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.349168   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.349573   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.349734   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.350076   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 13:39:42.350411   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.350647   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.350957   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.350975   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.351234   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.351385   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.351573   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.351625   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.351909   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.352179   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 13:39:42.352652   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.353118   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 13:39:42.353135   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 13:39:42.353154   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.353687   15646 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1014 13:39:42.354771   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 13:39:42.354786   15646 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 13:39:42.354804   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.356301   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.357778   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.357803   15646 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 13:39:42.358217   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.358236   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.358453   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.358640   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.358770   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.358885   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.358985   15646 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:42.358996   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 13:39:42.359012   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.359675   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I1014 13:39:42.359819   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.360128   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.360154   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.360163   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.360349   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.360649   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.360815   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.360925   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.361270   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.361283   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.361587   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.361741   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.362145   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.362944   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.363810   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.363822   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.363841   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1014 13:39:42.363990   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.364173   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.364224   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.364316   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.364373   15646 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1014 13:39:42.364597   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.364666   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.364676   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.365638   15646 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:42.365655   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1014 13:39:42.365670   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.365845   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40461
	I1014 13:39:42.365928   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.365995   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34805
	I1014 13:39:42.366109   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.366365   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.366796   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.366854   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.367045   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.367284   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.367547   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.367560   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.367609   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.367778   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I1014 13:39:42.367977   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.368213   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.368366   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.368894   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.368911   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.368967   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.369276   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.369431   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.369580   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.369773   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.369959   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.369990   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.370090   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.370218   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.371237   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371253   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371259   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1014 13:39:42.371311   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371375   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371695   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.372097   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.372116   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.372869   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.372999   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.373869   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1014 13:39:42.373883   15646 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:39:42.373883   15646 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1014 13:39:42.374346   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.373947   15646 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1014 13:39:42.375479   15646 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:42.375500   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:39:42.375517   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.376015   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:42.376024   15646 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 13:39:42.376101   15646 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:42.376118   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 13:39:42.376134   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.376620   15646 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 13:39:42.376640   15646 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1014 13:39:42.376656   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.376887   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 13:39:42.376898   15646 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 13:39:42.376937   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.377939   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:42.379302   15646 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:42.379322   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 13:39:42.379329   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.379350   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.380269   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.380300   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.380572   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.380754   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.380889   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.381203   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.381531   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.381560   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.381634   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.381664   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.381862   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.381981   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.382079   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.382782   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.382933   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.383154   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.383354   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.383498   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.383614   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.383922   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384284   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384333   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384368   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.384440   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384400   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.384409   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.384415   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.384680   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.384753   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.384755   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.384876   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.384909   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.384927   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.385189   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.387638   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I1014 13:39:42.388038   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.388454   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.388472   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.388758   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.388883   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.390116   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.390389   15646 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:42.390403   15646 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:39:42.390418   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.392871   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.393174   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.393209   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.393313   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.393455   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.393565   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.393677   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.684283   15646 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 13:39:42.684316   15646 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 13:39:42.693645   15646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:42.693913   15646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:39:42.766772   15646 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 13:39:42.766809   15646 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 13:39:42.802942   15646 node_ready.go:35] waiting up to 6m0s for node "addons-313496" to be "Ready" ...
	I1014 13:39:42.808996   15646 node_ready.go:49] node "addons-313496" has status "Ready":"True"
	I1014 13:39:42.809022   15646 node_ready.go:38] duration metric: took 6.048354ms for node "addons-313496" to be "Ready" ...
	I1014 13:39:42.809034   15646 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:39:42.823858   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:42.835590   15646 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:42.887640   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:42.955240   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:42.956393   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:42.970325   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 13:39:42.970352   15646 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 13:39:42.993800   15646 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 13:39:42.993833   15646 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 13:39:42.993905   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:42.995375   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:43.016630   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 13:39:43.016655   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 13:39:43.021758   15646 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:43.021786   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1014 13:39:43.032306   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:43.039172   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 13:39:43.039199   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 13:39:43.048622   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:43.078447   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 13:39:43.078472   15646 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 13:39:43.104107   15646 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 13:39:43.104132   15646 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 13:39:43.136090   15646 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:43.136119   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 13:39:43.246085   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 13:39:43.246111   15646 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 13:39:43.252548   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:43.263816   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 13:39:43.263835   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 13:39:43.347955   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:43.394250   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 13:39:43.394280   15646 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 13:39:43.458649   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 13:39:43.458674   15646 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 13:39:43.460932   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:43.460945   15646 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 13:39:43.549628   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 13:39:43.549651   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 13:39:43.649470   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:43.649494   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 13:39:43.703258   15646 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:43.703279   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 13:39:43.723388   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:43.795726   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 13:39:43.795756   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 13:39:43.865182   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:43.897163   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:44.074622   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 13:39:44.074650   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 13:39:44.304178   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 13:39:44.304205   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 13:39:44.545782   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 13:39:44.545807   15646 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 13:39:44.841969   15646 pod_ready.go:103] pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace has status "Ready":"False"
	I1014 13:39:44.882655   15646 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.188706042s)
	I1014 13:39:44.882693   15646 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1014 13:39:45.038028   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 13:39:45.038049   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 13:39:45.386815   15646 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-313496" context rescaled to 1 replicas
	I1014 13:39:45.390491   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 13:39:45.390510   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 13:39:45.592885   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.768989319s)
	I1014 13:39:45.592929   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.592940   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.592945   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.705272099s)
	I1014 13:39:45.592985   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.593000   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.593237   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593254   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.593263   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.593270   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.593289   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593301   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.593310   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.593317   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.593596   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:45.593611   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:45.593625   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593636   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.593641   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593652   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.618234   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.618254   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.618519   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:45.618559   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.618568   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.651367   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:45.651396   15646 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 13:39:46.001746   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:46.843144   15646 pod_ready.go:103] pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace has status "Ready":"False"
	I1014 13:39:47.514577   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.55929112s)
	I1014 13:39:47.514659   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:47.514672   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:47.514945   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:47.514955   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:47.514965   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:47.514974   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:47.514981   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:47.515195   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:47.515207   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:47.933264   15646 pod_ready.go:93] pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:47.933286   15646 pod_ready.go:82] duration metric: took 5.097659847s for pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:47.933297   15646 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gmrsw" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.952604   15646 pod_ready.go:93] pod "coredns-7c65d6cfc9-gmrsw" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:48.952626   15646 pod_ready.go:82] duration metric: took 1.019321331s for pod "coredns-7c65d6cfc9-gmrsw" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.952635   15646 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.967174   15646 pod_ready.go:93] pod "etcd-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:48.967195   15646 pod_ready.go:82] duration metric: took 14.554496ms for pod "etcd-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.967204   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:49.360379   15646 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 13:39:49.360421   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:49.363514   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:49.363880   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:49.363921   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:49.364125   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:49.364299   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:49.364464   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:49.364578   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:49.826808   15646 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1014 13:39:50.004828   15646 pod_ready.go:93] pod "kube-apiserver-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.004851   15646 pod_ready.go:82] duration metric: took 1.037640433s for pod "kube-apiserver-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.004861   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.013782   15646 pod_ready.go:93] pod "kube-controller-manager-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.013803   15646 pod_ready.go:82] duration metric: took 8.935744ms for pod "kube-controller-manager-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.013813   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7zvnt" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.030942   15646 pod_ready.go:93] pod "kube-proxy-7zvnt" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.030963   15646 pod_ready.go:82] duration metric: took 17.143392ms for pod "kube-proxy-7zvnt" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.030972   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.072642   15646 addons.go:234] Setting addon gcp-auth=true in "addons-313496"
	I1014 13:39:50.072693   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:50.073063   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:50.073106   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:50.088039   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I1014 13:39:50.088975   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:50.089537   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:50.089558   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:50.089869   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:50.090391   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:50.090421   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:50.105781   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I1014 13:39:50.106309   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:50.106798   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:50.106820   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:50.107193   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:50.107380   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:50.108955   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:50.109147   15646 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 13:39:50.109172   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:50.111728   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:50.112149   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:50.112178   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:50.112325   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:50.112483   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:50.112625   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:50.112732   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:50.244152   15646 pod_ready.go:93] pod "kube-scheduler-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.244175   15646 pod_ready.go:82] duration metric: took 213.196586ms for pod "kube-scheduler-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.244185   15646 pod_ready.go:39] duration metric: took 7.435139141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:39:50.244201   15646 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:39:50.244266   15646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:39:50.966025   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.009596819s)
	I1014 13:39:50.966069   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966078   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.972147576s)
	I1014 13:39:50.966097   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966080   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966111   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966182   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.970773376s)
	I1014 13:39:50.966202   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.933872301s)
	I1014 13:39:50.966218   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966227   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966230   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966236   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966523   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.966555   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.966581   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966611   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966610   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.966618   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966578   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966630   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966638   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966641   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966621   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966639   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966665   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966668   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966673   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966714   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966725   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966731   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966732   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966976   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.967005   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.967013   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.967059   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.967080   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.967087   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.968378   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.968397   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.968622   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.968654   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.968672   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.968685   15646 addons.go:475] Verifying addon ingress=true in "addons-313496"
	I1014 13:39:50.969687   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.921036739s)
	I1014 13:39:50.969720   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969730   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969778   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.717202406s)
	I1014 13:39:50.969810   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969820   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.621838251s)
	I1014 13:39:50.969835   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969845   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969821   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969893   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.246477089s)
	I1014 13:39:50.969909   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969923   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.969924   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.969926   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969932   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.969940   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969946   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969984   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.10477213s)
	I1014 13:39:50.970009   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970022   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970067   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.072874539s)
	I1014 13:39:50.970095   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970108   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	W1014 13:39:50.970101   15646 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 13:39:50.970117   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970124   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970124   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.970129   15646 retry.go:31] will retry after 197.386027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 13:39:50.970270   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970279   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970371   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970383   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970383   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.970391   15646 addons.go:475] Verifying addon registry=true in "addons-313496"
	I1014 13:39:50.970409   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970419   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970426   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970433   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970383   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.970627   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970641   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970661   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970671   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970895   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970910   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970959   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970968   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970977   15646 addons.go:475] Verifying addon metrics-server=true in "addons-313496"
	I1014 13:39:50.971132   15646 out.go:177] * Verifying ingress addon...
	I1014 13:39:50.972101   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.972137   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.972145   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.972152   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.972159   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.972225   15646 out.go:177] * Verifying registry addon...
	I1014 13:39:50.973136   15646 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-313496 service yakd-dashboard -n yakd-dashboard
	
	I1014 13:39:50.974069   15646 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1014 13:39:50.974965   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.974977   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.974991   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.975001   15646 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 13:39:50.999401   15646 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 13:39:50.999423   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:50.999519   15646 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 13:39:50.999537   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:51.058405   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:51.058426   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:51.058692   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:51.058711   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:51.168271   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:51.544865   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:51.545177   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:51.990522   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:51.990613   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:52.484812   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:52.485488   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:52.991292   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:52.991835   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:53.471206   15646 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.226915486s)
	I1014 13:39:53.471252   15646 api_server.go:72] duration metric: took 11.26754305s to wait for apiserver process to appear ...
	I1014 13:39:53.471260   15646 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:39:53.471281   15646 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1014 13:39:53.471281   15646 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.362116774s)
	I1014 13:39:53.471206   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.469401713s)
	I1014 13:39:53.471393   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.471417   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.471432   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.303118957s)
	I1014 13:39:53.471465   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.471481   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.471743   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.471757   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.471765   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.471771   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.472071   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:53.472126   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.472144   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.472159   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.472169   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.472272   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:53.472284   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.472296   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.472327   15646 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-313496"
	I1014 13:39:53.472426   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.472604   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.472453   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:53.473181   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:53.474089   15646 out.go:177] * Verifying csi-hostpath-driver addon...
	I1014 13:39:53.475706   15646 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 13:39:53.476497   15646 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 13:39:53.477085   15646 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 13:39:53.477106   15646 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 13:39:53.483784   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:53.483954   15646 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I1014 13:39:53.484115   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:53.485601   15646 api_server.go:141] control plane version: v1.31.1
	I1014 13:39:53.485620   15646 api_server.go:131] duration metric: took 14.353511ms to wait for apiserver health ...
	I1014 13:39:53.485628   15646 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:39:53.498326   15646 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 13:39:53.498347   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:53.514436   15646 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 13:39:53.514462   15646 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 13:39:53.532248   15646 system_pods.go:59] 19 kube-system pods found
	I1014 13:39:53.532289   15646 system_pods.go:61] "amd-gpu-device-plugin-m9mtz" [2fc02ee9-2529-4893-abc3-e638a461db45] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1014 13:39:53.532298   15646 system_pods.go:61] "coredns-7c65d6cfc9-69r77" [1c55ebf0-8189-43c8-b05c-375564deee96] Running
	I1014 13:39:53.532305   15646 system_pods.go:61] "coredns-7c65d6cfc9-gmrsw" [bb4aafb5-707d-46b8-8f09-da731dd7b975] Running
	I1014 13:39:53.532312   15646 system_pods.go:61] "csi-hostpath-attacher-0" [35914d73-1e05-4cb8-a4a9-ef439861030f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 13:39:53.532318   15646 system_pods.go:61] "csi-hostpath-resizer-0" [d664c078-5d63-4a85-af0e-797d001ec728] Pending
	I1014 13:39:53.532334   15646 system_pods.go:61] "csi-hostpathplugin-vcsrg" [f0796f57-a38e-4662-b0db-f8717051d902] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 13:39:53.532343   15646 system_pods.go:61] "etcd-addons-313496" [7f91653e-02a7-4c1b-9e71-445271163d23] Running
	I1014 13:39:53.532349   15646 system_pods.go:61] "kube-apiserver-addons-313496" [4d56adc4-d1cd-4c02-9cc3-92236aaeb40a] Running
	I1014 13:39:53.532355   15646 system_pods.go:61] "kube-controller-manager-addons-313496" [584d1c59-ade1-4c41-96fe-8d7b394b06f3] Running
	I1014 13:39:53.532364   15646 system_pods.go:61] "kube-ingress-dns-minikube" [664164ae-6d4b-47d0-8091-c4a9ae18ae9a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 13:39:53.532372   15646 system_pods.go:61] "kube-proxy-7zvnt" [357a51d7-a6c0-4616-aef2-fe9c7074e51d] Running
	I1014 13:39:53.532379   15646 system_pods.go:61] "kube-scheduler-addons-313496" [ec2ff7d8-274f-469f-a656-1f1267296410] Running
	I1014 13:39:53.532388   15646 system_pods.go:61] "metrics-server-84c5f94fbc-cggcl" [33ed4d65-0bcf-4a12-beaf-298d4c5f2714] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 13:39:53.532400   15646 system_pods.go:61] "nvidia-device-plugin-daemonset-kkmfm" [846014ef-c2c5-47a1-b0ae-3e582a248ee6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 13:39:53.532412   15646 system_pods.go:61] "registry-66c9cd494c-kxfcz" [a4d53217-34bc-44bb-8e30-d6b8914b6825] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 13:39:53.532424   15646 system_pods.go:61] "registry-proxy-xsptb" [ed9b7051-496c-4b26-be7b-c8c2afd04b8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 13:39:53.532436   15646 system_pods.go:61] "snapshot-controller-56fcc65765-ttgh7" [c91f9671-b7dc-43c9-b0f2-347714aec2ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.532450   15646 system_pods.go:61] "snapshot-controller-56fcc65765-vvqh6" [ee08ee62-c76c-4fcf-947e-9dd882c3e072] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.532458   15646 system_pods.go:61] "storage-provisioner" [3ad1bb99-d287-4642-957b-3d383adfa12a] Running
	I1014 13:39:53.532467   15646 system_pods.go:74] duration metric: took 46.83231ms to wait for pod list to return data ...
	I1014 13:39:53.532478   15646 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:39:53.542073   15646 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:39:53.542104   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 13:39:53.551633   15646 default_sa.go:45] found service account: "default"
	I1014 13:39:53.551659   15646 default_sa.go:55] duration metric: took 19.17261ms for default service account to be created ...
	I1014 13:39:53.551670   15646 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:39:53.576850   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:39:53.620425   15646 system_pods.go:86] 19 kube-system pods found
	I1014 13:39:53.620472   15646 system_pods.go:89] "amd-gpu-device-plugin-m9mtz" [2fc02ee9-2529-4893-abc3-e638a461db45] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1014 13:39:53.620481   15646 system_pods.go:89] "coredns-7c65d6cfc9-69r77" [1c55ebf0-8189-43c8-b05c-375564deee96] Running
	I1014 13:39:53.620489   15646 system_pods.go:89] "coredns-7c65d6cfc9-gmrsw" [bb4aafb5-707d-46b8-8f09-da731dd7b975] Running
	I1014 13:39:53.620498   15646 system_pods.go:89] "csi-hostpath-attacher-0" [35914d73-1e05-4cb8-a4a9-ef439861030f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 13:39:53.620507   15646 system_pods.go:89] "csi-hostpath-resizer-0" [d664c078-5d63-4a85-af0e-797d001ec728] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 13:39:53.620516   15646 system_pods.go:89] "csi-hostpathplugin-vcsrg" [f0796f57-a38e-4662-b0db-f8717051d902] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 13:39:53.620524   15646 system_pods.go:89] "etcd-addons-313496" [7f91653e-02a7-4c1b-9e71-445271163d23] Running
	I1014 13:39:53.620532   15646 system_pods.go:89] "kube-apiserver-addons-313496" [4d56adc4-d1cd-4c02-9cc3-92236aaeb40a] Running
	I1014 13:39:53.620538   15646 system_pods.go:89] "kube-controller-manager-addons-313496" [584d1c59-ade1-4c41-96fe-8d7b394b06f3] Running
	I1014 13:39:53.620548   15646 system_pods.go:89] "kube-ingress-dns-minikube" [664164ae-6d4b-47d0-8091-c4a9ae18ae9a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 13:39:53.620557   15646 system_pods.go:89] "kube-proxy-7zvnt" [357a51d7-a6c0-4616-aef2-fe9c7074e51d] Running
	I1014 13:39:53.620563   15646 system_pods.go:89] "kube-scheduler-addons-313496" [ec2ff7d8-274f-469f-a656-1f1267296410] Running
	I1014 13:39:53.620570   15646 system_pods.go:89] "metrics-server-84c5f94fbc-cggcl" [33ed4d65-0bcf-4a12-beaf-298d4c5f2714] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 13:39:53.620579   15646 system_pods.go:89] "nvidia-device-plugin-daemonset-kkmfm" [846014ef-c2c5-47a1-b0ae-3e582a248ee6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 13:39:53.620588   15646 system_pods.go:89] "registry-66c9cd494c-kxfcz" [a4d53217-34bc-44bb-8e30-d6b8914b6825] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 13:39:53.620601   15646 system_pods.go:89] "registry-proxy-xsptb" [ed9b7051-496c-4b26-be7b-c8c2afd04b8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 13:39:53.620610   15646 system_pods.go:89] "snapshot-controller-56fcc65765-ttgh7" [c91f9671-b7dc-43c9-b0f2-347714aec2ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.620621   15646 system_pods.go:89] "snapshot-controller-56fcc65765-vvqh6" [ee08ee62-c76c-4fcf-947e-9dd882c3e072] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.620627   15646 system_pods.go:89] "storage-provisioner" [3ad1bb99-d287-4642-957b-3d383adfa12a] Running
	I1014 13:39:53.620638   15646 system_pods.go:126] duration metric: took 68.960575ms to wait for k8s-apps to be running ...
	I1014 13:39:53.620647   15646 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:39:53.620703   15646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:39:53.981738   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:53.981838   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:53.983676   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:54.483575   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:54.483775   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:54.484122   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:54.702030   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.125141186s)
	I1014 13:39:54.702083   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:54.702098   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:54.702100   15646 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.08136487s)
	I1014 13:39:54.702126   15646 system_svc.go:56] duration metric: took 1.081477085s WaitForService to wait for kubelet
	I1014 13:39:54.702137   15646 kubeadm.go:582] duration metric: took 12.498427406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:39:54.702163   15646 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:39:54.702356   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:54.702368   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:54.702370   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:54.702387   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:54.702395   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:54.702641   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:54.702654   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:54.702669   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:54.703591   15646 addons.go:475] Verifying addon gcp-auth=true in "addons-313496"
	I1014 13:39:54.705250   15646 out.go:177] * Verifying gcp-auth addon...
	I1014 13:39:54.707632   15646 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 13:39:54.742119   15646 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:39:54.742152   15646 node_conditions.go:123] node cpu capacity is 2
	I1014 13:39:54.742167   15646 node_conditions.go:105] duration metric: took 39.99816ms to run NodePressure ...
	I1014 13:39:54.742180   15646 start.go:241] waiting for startup goroutines ...
	I1014 13:39:54.742390   15646 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 13:39:54.742404   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:54.978423   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:54.981546   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:54.982442   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:55.211567   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:55.484041   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:55.484470   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:55.484942   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:55.711451   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:55.980783   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:55.982718   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:55.984095   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:56.213871   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:56.480283   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:56.480461   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:56.482467   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:56.711098   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:56.978824   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:56.979961   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:56.981798   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:57.211400   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:57.482131   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:57.482181   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:57.482408   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:57.711282   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:57.979255   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:57.979632   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:57.982799   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:58.211166   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:58.479399   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:58.479726   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:58.481357   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:58.712157   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:58.978630   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:58.979112   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:58.981473   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:59.212550   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:59.479071   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.479307   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:59.481246   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:59.712227   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:59.978336   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:59.978813   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.983506   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.211109   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:00.479045   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.480714   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:00.481796   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.711396   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:00.978616   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:00.982175   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.982948   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.211856   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:01.477964   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.478717   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.481535   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:01.712287   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:01.979651   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.980922   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.982427   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.211414   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:02.479511   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.479842   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.481526   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.712618   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:02.979311   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.979505   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.982045   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.212265   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:03.482340   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.482579   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.483854   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.712039   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:03.979278   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.979581   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.981210   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.212031   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.479908   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.479929   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.483325   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.711575   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.979051   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.979333   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.980786   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.212262   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.479465   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.479936   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.481997   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.711968   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.981135   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.981991   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.983683   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.210972   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.481888   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.482029   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.482678   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.711289   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.978513   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.978765   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.980777   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.211773   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.478715   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.479490   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.484437   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.711453   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.979797   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.980014   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.990967   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.211779   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:08.478301   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.478976   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.481446   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.711861   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:08.979533   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.980156   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.981391   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.210791   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.479829   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.480292   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.481876   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.861507   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.978049   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.980513   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.980811   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.210799   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.479529   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.480334   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.481457   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.711355   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.979513   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.979946   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.982322   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.210957   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.478763   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.480690   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.481093   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.712196   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.986955   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.987208   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.987905   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.213203   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.479266   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.480295   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.486884   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.712005   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.979164   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.979782   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.981661   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.211244   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.478327   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.478883   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.481238   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.711953   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.980334   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.980478   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.981305   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.211881   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.480982   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.484294   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:14.485875   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.711575   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.981397   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.984371   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.987733   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.212040   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.485425   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.495512   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.499103   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.712941   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.985560   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.988037   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.988156   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.211959   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.481669   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.483883   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.583899   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.710964   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.978265   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.980151   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.981729   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.211565   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.958714   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.958864   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.959294   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.959882   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.056188   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.056349   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.056846   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.211840   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.486040   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.486286   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.487008   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.711114   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.981669   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.981780   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.982612   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.213712   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.478887   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.479470   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.482883   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.711964   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.979867   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.980352   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.989632   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.212798   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.479605   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.479871   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.481595   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.714281   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.978378   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.979875   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.981567   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.211517   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.481544   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.481866   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.482687   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.711433   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.978543   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.979945   15646 kapi.go:107] duration metric: took 31.004942682s to wait for kubernetes.io/minikube-addons=registry ...
	I1014 13:40:21.981932   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.211700   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.478756   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.481032   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.712478   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.978868   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.982514   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.210834   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.478903   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.481588   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.711977   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.980542   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.981635   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.441967   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.478555   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.481368   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.712090   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.979688   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.982005   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.212156   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.480784   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.482396   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.712843   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.979042   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.982231   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.212266   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.479803   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.482285   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.724256   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.978709   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.980967   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.211848   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.479178   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.482293   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.711702   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.979465   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.981314   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.211643   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.485129   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.485192   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.711934   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.981378   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.981648   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.211856   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.479554   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.482010   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.711786   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.979279   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.981418   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.211000   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.893295   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.894769   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.894901   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.984114   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.987110   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.210979   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.478624   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.481528   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.712617   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.979380   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.984180   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.213603   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:32.479081   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.481924   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.715103   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:32.985010   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.986128   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.212131   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.478316   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.480866   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.718248   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.979361   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.982172   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.212318   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:34.477735   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.480173   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.711862   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.115693   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.117079   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.212099   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.480281   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.481979   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.717482   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.982870   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.984774   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.211315   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:36.479180   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.481099   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.714470   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.267041   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.267388   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.267868   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.479766   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.481784   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.711957   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.980775   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.987169   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.212310   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.481514   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.483922   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.711601   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.984393   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.984713   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.211641   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.479832   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.481896   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.710848   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.979411   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.981880   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.210897   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.479300   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.483786   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.711707   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.981679   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.986097   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.218305   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.478832   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.482737   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.711341   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.978847   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.985825   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.211312   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.479324   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.480587   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.712987   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.978530   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.981097   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.211464   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.769016   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.769687   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.770090   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.981726   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.981907   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.212083   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.481356   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.481573   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.710948   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.980238   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.981869   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.211549   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.479337   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.480925   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.711511   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.984069   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.984683   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.212584   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.480033   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.481453   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.712931   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.978934   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.981133   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.213023   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.480170   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.482057   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.711053   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.982155   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.982294   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.211734   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.478809   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.481093   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.711652   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.980730   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.982901   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.211337   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.478476   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.480936   15646 kapi.go:107] duration metric: took 56.004438949s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1014 13:40:49.712012   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.979658   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.211831   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.479051   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.711987   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.978857   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.211699   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.480549   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.711830   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.979513   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.211432   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.478805   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.711486   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.978571   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.211626   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.479364   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.712268   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.978935   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.211762   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.479785   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.711538   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.978635   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.211320   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.478282   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.710475   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.979124   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.212135   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.761865   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.762715   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.979664   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.212590   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.478572   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.711184   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.978833   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.211916   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.479601   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.711242   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.978449   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.210610   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.478965   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.711442   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.978389   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.211864   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.478906   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.711503   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.978477   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.211247   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.478685   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.713698   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.979942   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.211284   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.478528   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.712225   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.978513   15646 kapi.go:107] duration metric: took 1m12.004438023s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1014 13:41:03.210854   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.097307   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.211572   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.714767   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.211623   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.712077   15646 kapi.go:107] duration metric: took 1m11.004439313s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 13:41:05.714109   15646 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-313496 cluster.
	I1014 13:41:05.715590   15646 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 13:41:05.716887   15646 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1014 13:41:05.718242   15646 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1014 13:41:05.719353   15646 addons.go:510] duration metric: took 1m23.515638899s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1014 13:41:05.719396   15646 start.go:246] waiting for cluster config update ...
	I1014 13:41:05.719414   15646 start.go:255] writing updated cluster config ...
	I1014 13:41:05.719653   15646 ssh_runner.go:195] Run: rm -f paused
	I1014 13:41:05.769889   15646 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:41:05.771798   15646 out.go:177] * Done! kubectl is now configured to use "addons-313496" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.189134506Z" level=debug msg="exporting opaque data as blob \"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30\"" file="storage/storage_src.go:115"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.191460164Z" level=debug msg="Created container \"f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102\"" file="storage/runtime.go:241"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.192122886Z" level=debug msg="Container \"f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102\" has work directory \"/var/lib/containers/storage/overlay-containers/f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102/userdata\"" file="storage/runtime.go:276"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.192225470Z" level=debug msg="Container \"f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102\" has run directory \"/var/run/containers/storage/overlay-containers/f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102/userdata\"" file="storage/runtime.go:286"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.192293177Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-55bf9c44b4-qln9q_default_bbceeaae-a919-4e5e-add2-814748d5c2b5_0 from container storage creation to container volume configuration" file="resourcestore/resourcestore.go:227" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.193045501Z" level=debug msg="Skipping relabel for /var/lib/kubelet/pods/bbceeaae-a919-4e5e-add2-814748d5c2b5/etc-hosts because kubelet did not request it" file="server/container_create_linux.go:1082" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.193111347Z" level=debug msg="Skipping relabel for /var/lib/kubelet/pods/bbceeaae-a919-4e5e-add2-814748d5c2b5/containers/hello-world-app/b7f62bba because kubelet did not request it" file="server/container_create_linux.go:1082" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.193147756Z" level=debug msg="Skipping relabel for /var/lib/kubelet/pods/bbceeaae-a919-4e5e-add2-814748d5c2b5/volumes/kubernetes.io~projected/kube-api-access-bgfzx because kubelet did not request it" file="server/container_create_linux.go:1082" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.193179391Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-55bf9c44b4-qln9q_default_bbceeaae-a919-4e5e-add2-814748d5c2b5_0 from container volume configuration to container device creation" file="resourcestore/resourcestore.go:227" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.193237826Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-55bf9c44b4-qln9q_default_bbceeaae-a919-4e5e-add2-814748d5c2b5_0 from container device creation to container storage start" file="resourcestore/resourcestore.go:227" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.193406275Z" level=debug msg="overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/3PBYP7DXD2KC3COE5WQJECCBOC,upperdir=/var/lib/containers/storage/overlay/9c76faa10557bc771b104513f47467a2d249b9567565df9c80200a6a33c0fd47/diff,workdir=/var/lib/containers/storage/overlay/9c76faa10557bc771b104513f47467a2d249b9567565df9c80200a6a33c0fd47/work,nodev,metacopy=on,volatile" file="overlay/overlay.go:1834"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.193962286Z" level=debug msg="Mounted container \"f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102\" at \"/var/lib/containers/storage/overlay/9c76faa10557bc771b104513f47467a2d249b9567565df9c80200a6a33c0fd47/merged\"" file="storage/runtime.go:464"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194023168Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-55bf9c44b4-qln9q_default_bbceeaae-a919-4e5e-add2-814748d5c2b5_0 from container storage start to container spec configuration" file="resourcestore/resourcestore.go:227" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194059196Z" level=debug msg="Setting container's log_path = /var/log/pods/default_hello-world-app-55bf9c44b4-qln9q_bbceeaae-a919-4e5e-add2-814748d5c2b5, sbox.logdir = hello-world-app/0.log, ctr.logfile = /var/log/pods/default_hello-world-app-55bf9c44b4-qln9q_bbceeaae-a919-4e5e-add2-814748d5c2b5/hello-world-app/0.log" file="container/container.go:453"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194243915Z" level=debug msg="Setup seccomp from profile field: &SecurityProfile{ProfileType:Unconfined,LocalhostRef:,}" file="seccomp/seccomp.go:188" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194286490Z" level=debug msg="Setting container's log_path = /var/log/pods/default_hello-world-app-55bf9c44b4-qln9q_bbceeaae-a919-4e5e-add2-814748d5c2b5, sbox.logdir = hello-world-app/0.log, ctr.logfile = /var/log/pods/default_hello-world-app-55bf9c44b4-qln9q_bbceeaae-a919-4e5e-add2-814748d5c2b5/hello-world-app/0.log" file="container/container.go:453"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194379507Z" level=debug msg="CONTAINER USER: 0" file="server/container_create.go:223" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194432502Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/9c76faa10557bc771b104513f47467a2d249b9567565df9c80200a6a33c0fd47/merged/etc/passwd: no such file or directory" file="utils/utils.go:170"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194465694Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/9c76faa10557bc771b104513f47467a2d249b9567565df9c80200a6a33c0fd47/merged/etc/group: no such file or directory" file="utils/utils.go:177"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.194539229Z" level=debug msg="/etc/system-fips does not exist on host, not mounting FIPS mode subscription" file="subscriptions/subscriptions.go:207"
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.195100045Z" level=debug msg="Setting stage for resource k8s_hello-world-app_hello-world-app-55bf9c44b4-qln9q_default_bbceeaae-a919-4e5e-add2-814748d5c2b5_0 from container spec configuration to container runtime creation" file="resourcestore/resourcestore.go:227" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 crio[664]: time="2024-10-14 13:44:22.195168831Z" level=debug msg="running conmon: /usr/libexec/crio/conmon" args="[-b /var/run/containers/storage/overlay-containers/f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102/userdata -c f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102 --exit-dir /var/run/crio/exits -l /var/log/pods/default_hello-world-app-55bf9c44b4-qln9q_bbceeaae-a919-4e5e-add2-814748d5c2b5/hello-world-app/0.log --log-level debug -n k8s_hello-world-app_hello-world-app-55bf9c44b4-qln9q_default_bbceeaae-a919-4e5e-add2-814748d5c2b5_0 -P /var/run/containers/storage/overlay-containers/f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102/userdata/conmon-pidfile -p /var/run/containers/storage/overlay-containers/f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102/userdata/pidfile --persist-dir /var/lib/containers/storage/overlay-containers/f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102/userdata -r
/usr/bin/runc --runtime-arg --root=/run/runc --socket-dir-path /var/run/crio --syslog -u f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102]" file="oci/runtime_oci.go:168" id=6e21eb72-c28c-4bc2-8aa4-8680ac9a8035 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 14 13:44:22 addons-313496 conmon[11071]: conmon f4275239fa5deb9d9235 <ndebug>: addr{sun_family=AF_UNIX, sun_path=/proc/self/fd/12/attach}
	Oct 14 13:44:22 addons-313496 conmon[11071]: conmon f4275239fa5deb9d9235 <ndebug>: terminal_ctrl_fd: 12
	Oct 14 13:44:22 addons-313496 conmon[11071]: conmon f4275239fa5deb9d9235 <ndebug>: winsz read side: 16, winsz write side: 16
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	f4275239fa5de       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Created             hello-world-app           0                   bcff4297578ca       hello-world-app-55bf9c44b4-qln9q
	f983b17f3b58c       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago            Running             nginx                     0                   fbb7032f79d11       nginx
	ac8e2b68a2526       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago            Running             busybox                   0                   eb4c935e951be       busybox
	1195f9a8df4e9       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago            Running             controller                0                   8ef4c329dd160       ingress-nginx-controller-5f85ff4588-xxf5h
	f21cbb9cbb24e       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago            Exited              patch                     1                   5f7e8c6ce8754       ingress-nginx-admission-patch-b6k5f
	a2a9ca25a2030       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago            Exited              create                    0                   48430eea5bc1a       ingress-nginx-admission-create-pnp6s
	fcdc01415b8f4       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago            Running             metrics-server            0                   f3521b0c24d46       metrics-server-84c5f94fbc-cggcl
	a5ebda403a908       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago            Running             minikube-ingress-dns      0                   7b67387e5e327       kube-ingress-dns-minikube
	459e3a06aa537       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago            Running             amd-gpu-device-plugin     0                   f5cc832ca4671       amd-gpu-device-plugin-m9mtz
	65a4291d6c524       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago            Running             storage-provisioner       0                   492b76f229527       storage-provisioner
	616361d8d4378       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago            Running             coredns                   0                   498ead475996e       coredns-7c65d6cfc9-69r77
	9d61ff151a442       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             4 minutes ago            Running             kube-proxy                0                   462d804a2d40d       kube-proxy-7zvnt
	000c7b368fb0c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             4 minutes ago            Running             kube-scheduler            0                   e2a9cfb1ac818       kube-scheduler-addons-313496
	340c28a59e7bb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             4 minutes ago            Running             kube-apiserver            0                   da5dcdd6ab62e       kube-apiserver-addons-313496
	fd2a8e9b921aa       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             4 minutes ago            Running             etcd                      0                   87f5cf26c9d0f       etcd-addons-313496
	04882c9038813       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             4 minutes ago            Running             kube-controller-manager   0                   8ae6efa4db7d1       kube-controller-manager-addons-313496
	
	
	==> coredns [616361d8d4378804e957eb4b6028aa2b7a1f4a55fa64d33c24b4320f1c5a8039] <==
	[INFO] 10.244.0.8:33411 - 14353 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000174572s
	[INFO] 10.244.0.8:33411 - 45228 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000160576s
	[INFO] 10.244.0.8:33411 - 61497 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000193148s
	[INFO] 10.244.0.8:33411 - 8298 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00007029s
	[INFO] 10.244.0.8:33411 - 63106 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00006458s
	[INFO] 10.244.0.8:33411 - 47593 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000076648s
	[INFO] 10.244.0.8:33411 - 53138 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000069408s
	[INFO] 10.244.0.8:34855 - 22545 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000110356s
	[INFO] 10.244.0.8:34855 - 22244 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000067216s
	[INFO] 10.244.0.8:41867 - 54549 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101174s
	[INFO] 10.244.0.8:41867 - 55027 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000051402s
	[INFO] 10.244.0.8:41142 - 22868 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058574s
	[INFO] 10.244.0.8:41142 - 22625 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000033959s
	[INFO] 10.244.0.8:32950 - 30392 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000113043s
	[INFO] 10.244.0.8:32950 - 30561 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069018s
	[INFO] 10.244.0.23:47494 - 10641 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000655501s
	[INFO] 10.244.0.23:49634 - 32660 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000116772s
	[INFO] 10.244.0.23:60304 - 62999 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000120066s
	[INFO] 10.244.0.23:36956 - 54367 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000073873s
	[INFO] 10.244.0.23:47826 - 23492 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000084231s
	[INFO] 10.244.0.23:45861 - 18091 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00007003s
	[INFO] 10.244.0.23:56469 - 47196 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00107016s
	[INFO] 10.244.0.23:32946 - 12149 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 458 0.001855443s
	[INFO] 10.244.0.28:34927 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000265723s
	[INFO] 10.244.0.28:34828 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000255755s
	
	
	==> describe nodes <==
	Name:               addons-313496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-313496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=addons-313496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_39_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-313496
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:39:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-313496
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:44:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:42:41 +0000   Mon, 14 Oct 2024 13:39:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:42:41 +0000   Mon, 14 Oct 2024 13:39:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:42:41 +0000   Mon, 14 Oct 2024 13:39:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:42:41 +0000   Mon, 14 Oct 2024 13:39:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    addons-313496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 293dfa51674b4a789ea1e2204c6437a9
	  System UUID:                293dfa51-674b-4a78-9ea1-e2204c6437a9
	  Boot ID:                    75724930-219f-4ba1-a96c-8f16884c2e8f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m16s
	  default                     hello-world-app-55bf9c44b4-qln9q             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-xxf5h    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m32s
	  kube-system                 amd-gpu-device-plugin-m9mtz                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  kube-system                 coredns-7c65d6cfc9-69r77                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m40s
	  kube-system                 etcd-addons-313496                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m45s
	  kube-system                 kube-apiserver-addons-313496                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-controller-manager-addons-313496        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  kube-system                 kube-proxy-7zvnt                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-scheduler-addons-313496                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 metrics-server-84c5f94fbc-cggcl              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m38s  kube-proxy       
	  Normal  Starting                 4m45s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m45s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m45s  kubelet          Node addons-313496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m45s  kubelet          Node addons-313496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m45s  kubelet          Node addons-313496 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m44s  kubelet          Node addons-313496 status is now: NodeReady
	  Normal  RegisteredNode           4m41s  node-controller  Node addons-313496 event: Registered Node addons-313496 in Controller
	
	
	==> dmesg <==
	[  +6.476801] systemd-fstab-generator[1203]: Ignoring "noauto" option for root device
	[  +0.091241] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.326380] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +0.176653] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.075864] kauditd_printk_skb: 115 callbacks suppressed
	[  +5.000896] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.173920] kauditd_printk_skb: 89 callbacks suppressed
	[Oct14 13:40] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.016490] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.499188] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.430837] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.011973] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.158324] kauditd_printk_skb: 3 callbacks suppressed
	[Oct14 13:41] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.491800] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.776408] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.404069] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.670842] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.004383] kauditd_printk_skb: 63 callbacks suppressed
	[  +6.962880] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.545232] kauditd_printk_skb: 3 callbacks suppressed
	[Oct14 13:42] kauditd_printk_skb: 25 callbacks suppressed
	[ +16.195774] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.875442] kauditd_printk_skb: 7 callbacks suppressed
	[Oct14 13:44] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [fd2a8e9b921aae625ee640bf6c996300e93556e2e3515bcb8c001b5575f0e96e] <==
	{"level":"info","ts":"2024-10-14T13:41:04.081912Z","caller":"traceutil/trace.go:171","msg":"trace[212189526] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1110; }","duration":"314.960895ms","start":"2024-10-14T13:41:03.766945Z","end":"2024-10-14T13:41:04.081906Z","steps":["trace[212189526] 'range keys from in-memory index tree'  (duration: 313.644148ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:41:04.080828Z","caller":"traceutil/trace.go:171","msg":"trace[1059839605] linearizableReadLoop","detail":"{readStateIndex:1142; appliedIndex:1141; }","duration":"334.408737ms","start":"2024-10-14T13:41:03.746408Z","end":"2024-10-14T13:41:04.080817Z","steps":["trace[1059839605] 'read index received'  (duration: 276.946998ms)","trace[1059839605] 'applied index is now lower than readState.Index'  (duration: 57.461315ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T13:41:04.080928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.516743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:04.081985Z","caller":"traceutil/trace.go:171","msg":"trace[63410484] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"335.577994ms","start":"2024-10-14T13:41:03.746402Z","end":"2024-10-14T13:41:04.081980Z","steps":["trace[63410484] 'agreement among raft nodes before linearized reading'  (duration: 334.471671ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.082041Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.746367Z","time spent":"335.662492ms","remote":"127.0.0.1:34300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-14T13:41:04.082170Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.999478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:04.082205Z","caller":"traceutil/trace.go:171","msg":"trace[1149194388] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1112; }","duration":"311.035075ms","start":"2024-10-14T13:41:03.771165Z","end":"2024-10-14T13:41:04.082200Z","steps":["trace[1149194388] 'agreement among raft nodes before linearized reading'  (duration: 310.987624ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.082221Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.771133Z","time spent":"311.084458ms","remote":"127.0.0.1:34122","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-10-14T13:41:04.081096Z","caller":"traceutil/trace.go:171","msg":"trace[1341300446] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"334.804484ms","start":"2024-10-14T13:41:03.746226Z","end":"2024-10-14T13:41:04.081030Z","steps":["trace[1341300446] 'process raft request'  (duration: 277.12123ms)","trace[1341300446] 'compare'  (duration: 56.924346ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T13:41:04.082712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.571604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-14T13:41:04.082759Z","caller":"traceutil/trace.go:171","msg":"trace[1495775215] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1112; }","duration":"202.620784ms","start":"2024-10-14T13:41:03.880131Z","end":"2024-10-14T13:41:04.082752Z","steps":["trace[1495775215] 'agreement among raft nodes before linearized reading'  (duration: 202.342253ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.083026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.261453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:04.083066Z","caller":"traceutil/trace.go:171","msg":"trace[688169331] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1112; }","duration":"213.302181ms","start":"2024-10-14T13:41:03.869758Z","end":"2024-10-14T13:41:04.083060Z","steps":["trace[688169331] 'agreement among raft nodes before linearized reading'  (duration: 213.252729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.083579Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.746210Z","time spent":"336.106762ms","remote":"127.0.0.1:34394","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1110 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-10-14T13:41:04.081151Z","caller":"traceutil/trace.go:171","msg":"trace[738532267] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"318.155325ms","start":"2024-10-14T13:41:03.762987Z","end":"2024-10-14T13:41:04.081142Z","steps":["trace[738532267] 'process raft request'  (duration: 317.796221ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.083932Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.762971Z","time spent":"320.855684ms","remote":"127.0.0.1:34196","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-xxf5h.17fe557405bf9b0d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-xxf5h.17fe557405bf9b0d\" value_size:675 lease:8080181808251469247 >> failure:<>"}
	{"level":"info","ts":"2024-10-14T13:41:35.967283Z","caller":"traceutil/trace.go:171","msg":"trace[539375714] linearizableReadLoop","detail":"{readStateIndex:1328; appliedIndex:1327; }","duration":"199.901265ms","start":"2024-10-14T13:41:35.767369Z","end":"2024-10-14T13:41:35.967270Z","steps":["trace[539375714] 'read index received'  (duration: 199.745848ms)","trace[539375714] 'applied index is now lower than readState.Index'  (duration: 155.013µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T13:41:35.967494Z","caller":"traceutil/trace.go:171","msg":"trace[1271355131] transaction","detail":"{read_only:false; response_revision:1291; number_of_response:1; }","duration":"369.274456ms","start":"2024-10-14T13:41:35.598211Z","end":"2024-10-14T13:41:35.967485Z","steps":["trace[1271355131] 'process raft request'  (duration: 368.944168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:35.967572Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:35.598197Z","time spent":"369.325745ms","remote":"127.0.0.1:34282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1287 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T13:41:35.967752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.380393ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-14T13:41:35.968813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.222397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:35.968847Z","caller":"traceutil/trace.go:171","msg":"trace[1352015729] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1291; }","duration":"101.26558ms","start":"2024-10-14T13:41:35.867573Z","end":"2024-10-14T13:41:35.968839Z","steps":["trace[1352015729] 'agreement among raft nodes before linearized reading'  (duration: 101.164144ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:41:35.969105Z","caller":"traceutil/trace.go:171","msg":"trace[38771890] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1291; }","duration":"201.733561ms","start":"2024-10-14T13:41:35.767363Z","end":"2024-10-14T13:41:35.969096Z","steps":["trace[38771890] 'agreement among raft nodes before linearized reading'  (duration: 200.367001ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:42:06.637050Z","caller":"traceutil/trace.go:171","msg":"trace[237475859] transaction","detail":"{read_only:false; response_revision:1581; number_of_response:1; }","duration":"166.724719ms","start":"2024-10-14T13:42:06.470301Z","end":"2024-10-14T13:42:06.637026Z","steps":["trace[237475859] 'process raft request'  (duration: 166.587294ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:42:44.580165Z","caller":"traceutil/trace.go:171","msg":"trace[1709847578] transaction","detail":"{read_only:false; response_revision:1796; number_of_response:1; }","duration":"194.608499ms","start":"2024-10-14T13:42:44.385528Z","end":"2024-10-14T13:42:44.580137Z","steps":["trace[1709847578] 'process raft request'  (duration: 194.43891ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:44:22 up 5 min,  0 users,  load average: 0.94, 1.22, 0.65
	Linux addons-313496 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [340c28a59e7bb3430fe29720dfde756e460c4bcea8862296fe9665759230f850] <==
	 > logger="UnhandledError"
	E1014 13:41:28.823164       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.220.222:443: connect: connection refused" logger="UnhandledError"
	E1014 13:41:28.829081       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.220.222:443: connect: connection refused" logger="UnhandledError"
	I1014 13:41:28.893723       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1014 13:41:31.311995       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.23.145"}
	I1014 13:41:54.674308       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1014 13:41:55.799257       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1014 13:42:00.212091       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1014 13:42:00.409965       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.226.119"}
	E1014 13:42:01.940023       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1014 13:42:14.892582       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1014 13:42:33.792996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.793081       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.867101       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.867198       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.915907       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.916018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.946196       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.946286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.971503       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.971552       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1014 13:42:34.948418       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1014 13:42:34.972457       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1014 13:42:35.093897       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1014 13:44:20.903281       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.219.121"}
	
	
	==> kube-controller-manager [04882c90388135e7a0ca7695b407a07f1bd0c7b335ab40d90edc9c65f61e824d] <==
	E1014 13:42:54.145359       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:42:55.630261       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:42:55.630314       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:12.376024       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:12.376083       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:12.896142       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:12.896249       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:15.864781       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:15.864876       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:17.588669       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:17.588774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:50.763549       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:50.763863       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:51.350471       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:51.350581       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:53.820373       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:53.820456       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:43:57.258728       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:43:57.258922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1014 13:44:20.702501       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="44.996997ms"
	I1014 13:44:20.713309       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.564365ms"
	I1014 13:44:20.713668       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="63.67µs"
	I1014 13:44:20.721923       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="192.813µs"
	I1014 13:44:22.529948       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="17.212775ms"
	I1014 13:44:22.530009       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.628µs"
	
	
	==> kube-proxy [9d61ff151a442fc51b211a8fee95c81ae65ea90e27704e2a58afae2cf5b6d965] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:39:44.152595       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:39:44.164296       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.177"]
	E1014 13:39:44.164383       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:39:44.273083       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:39:44.273127       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:39:44.273150       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:39:44.276357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:39:44.276697       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:39:44.276710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:39:44.278437       1 config.go:199] "Starting service config controller"
	I1014 13:39:44.278449       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:39:44.278464       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:39:44.278468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:39:44.278934       1 config.go:328] "Starting node config controller"
	I1014 13:39:44.278940       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:39:44.379213       1 shared_informer.go:320] Caches are synced for node config
	I1014 13:39:44.379242       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:39:44.379269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [000c7b368fb0c82a3afd37bffaa28fb1bcb88ca467dacf69ea3fcbe6feb37a89] <==
	W1014 13:39:34.683172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 13:39:34.683602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:34.693268       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:39:34.693403       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:39:35.576296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:35.576387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.583227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 13:39:35.583271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.605556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 13:39:35.605594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.655265       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:39:35.655319       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:39:35.715388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 13:39:35.715443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.798706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 13:39:35.798757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.887442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:35.887492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.895907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 13:39:35.896061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.910043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 13:39:35.910143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.939843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:39:35.939959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:39:38.063992       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 13:44:17 addons-313496 kubelet[1210]: E1014 13:44:17.544766    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913457544060797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587583,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:44:17 addons-313496 kubelet[1210]: E1014 13:44:17.545215    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913457544060797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:587583,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.692343    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d664c078-5d63-4a85-af0e-797d001ec728" containerName="csi-resizer"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.692856    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="csi-external-health-monitor-controller"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.692961    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="csi-provisioner"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693000    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="hostpath"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693093    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="35914d73-1e05-4cb8-a4a9-ef439861030f" containerName="csi-attacher"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693126    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="node-driver-registrar"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693214    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c91f9671-b7dc-43c9-b0f2-347714aec2ba" containerName="volume-snapshot-controller"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693345    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="csi-snapshotter"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693385    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="liveness-probe"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693469    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="21c8fe85-b2e2-43dc-b232-90ece4febe2e" containerName="task-pv-container"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: E1014 13:44:20.693500    1210 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ee08ee62-c76c-4fcf-947e-9dd882c3e072" containerName="volume-snapshot-controller"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.693690    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="35914d73-1e05-4cb8-a4a9-ef439861030f" containerName="csi-attacher"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.693782    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="c91f9671-b7dc-43c9-b0f2-347714aec2ba" containerName="volume-snapshot-controller"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.693816    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="21c8fe85-b2e2-43dc-b232-90ece4febe2e" containerName="task-pv-container"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.693909    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="csi-provisioner"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.693941    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="csi-snapshotter"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.694022    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="hostpath"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.694054    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee08ee62-c76c-4fcf-947e-9dd882c3e072" containerName="volume-snapshot-controller"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.694136    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="csi-external-health-monitor-controller"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.694167    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664c078-5d63-4a85-af0e-797d001ec728" containerName="csi-resizer"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.694254    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="node-driver-registrar"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.694287    1210 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0796f57-a38e-4662-b0db-f8717051d902" containerName="liveness-probe"
	Oct 14 13:44:20 addons-313496 kubelet[1210]: I1014 13:44:20.751563    1210 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgfzx\" (UniqueName: \"kubernetes.io/projected/bbceeaae-a919-4e5e-add2-814748d5c2b5-kube-api-access-bgfzx\") pod \"hello-world-app-55bf9c44b4-qln9q\" (UID: \"bbceeaae-a919-4e5e-add2-814748d5c2b5\") " pod="default/hello-world-app-55bf9c44b4-qln9q"
	
	
	==> storage-provisioner [65a4291d6c524d1dde5edcb65a98eed8b24ee7acf960c9a24d17f36e05ce41e4] <==
	I1014 13:39:51.094784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 13:39:51.177212       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 13:39:51.177285       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 13:39:51.425162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 13:39:51.426958       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cbdc0c7-ddb7-4724-aa78-342ddce41743", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-313496_f7441b78-eea4-419b-b255-37d1b82027a5 became leader
	I1014 13:39:51.427005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-313496_f7441b78-eea4-419b-b255-37d1b82027a5!
	I1014 13:39:51.533856       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-313496_f7441b78-eea4-419b-b255-37d1b82027a5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-313496 -n addons-313496
helpers_test.go:261: (dbg) Run:  kubectl --context addons-313496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-pnp6s ingress-nginx-admission-patch-b6k5f
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-313496 describe pod ingress-nginx-admission-create-pnp6s ingress-nginx-admission-patch-b6k5f
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-313496 describe pod ingress-nginx-admission-create-pnp6s ingress-nginx-admission-patch-b6k5f: exit status 1 (58.164737ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pnp6s" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b6k5f" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-313496 describe pod ingress-nginx-admission-create-pnp6s ingress-nginx-admission-patch-b6k5f: exit status 1
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable ingress-dns --alsologtostderr -v=1: (1.252392166s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable ingress --alsologtostderr -v=1: (7.706181307s)
--- FAIL: TestAddons/parallel/Ingress (152.34s)

                                                
                                    
TestAddons/parallel/MetricsServer (306.35s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.052804ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-cggcl" [33ed4d65-0bcf-4a12-beaf-298d4c5f2714] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00406272s
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (66.477302ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 2m10.027803021s

                                                
                                                
** /stderr **
I1014 13:41:54.029883   15023 retry.go:31] will retry after 2.425848973s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (62.92708ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 2m12.517969771s

                                                
                                                
** /stderr **
I1014 13:41:56.519821   15023 retry.go:31] will retry after 3.236108316s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (99.431123ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 2m15.8537982s

                                                
                                                
** /stderr **
I1014 13:41:59.855895   15023 retry.go:31] will retry after 7.635380811s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (73.935193ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 2m23.565015796s

                                                
                                                
** /stderr **
I1014 13:42:07.566494   15023 retry.go:31] will retry after 7.255558757s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (71.096486ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 2m30.892363837s

                                                
                                                
** /stderr **
I1014 13:42:14.893759   15023 retry.go:31] will retry after 20.731205636s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (70.348956ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 2m51.69402697s

                                                
                                                
** /stderr **
I1014 13:42:35.695977   15023 retry.go:31] will retry after 19.999536277s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (65.400839ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 3m11.768524331s

                                                
                                                
** /stderr **
I1014 13:42:55.770141   15023 retry.go:31] will retry after 46.507363233s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (68.259581ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 3m58.349542366s

                                                
                                                
** /stderr **
I1014 13:43:42.351209   15023 retry.go:31] will retry after 1m11.063418184s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (63.357277ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 5m9.482876465s

                                                
                                                
** /stderr **
I1014 13:44:53.484731   15023 retry.go:31] will retry after 1m20.714721512s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (62.099963ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 6m30.2671068s

                                                
                                                
** /stderr **
I1014 13:46:14.269046   15023 retry.go:31] will retry after 37.405983394s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-313496 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-313496 top pods -n kube-system: exit status 1 (64.10771ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-m9mtz, age: 7m7.742273556s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-313496 -n addons-313496
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 logs -n 25: (1.169096368s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-520840                                                                     | download-only-520840 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-882366                                                                     | download-only-882366 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-011047 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | binary-mirror-011047                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35043                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-011047                                                                     | binary-mirror-011047 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| addons  | disable dashboard -p                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | addons-313496                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | addons-313496                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-313496 --wait=true                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:41 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | -p addons-313496                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-313496 ssh cat                                                                       | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | /opt/local-path-provisioner/pvc-c19f89aa-af99-4f45-994e-6760df4750a7_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:42 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-313496 ip                                                                            | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:41 UTC | 14 Oct 24 13:41 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-313496 ssh curl -s                                                                   | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-313496 addons                                                                        | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:42 UTC | 14 Oct 24 13:42 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-313496 ip                                                                            | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:44 UTC | 14 Oct 24 13:44 UTC |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:44 UTC | 14 Oct 24 13:44 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-313496 addons disable                                                                | addons-313496        | jenkins | v1.34.0 | 14 Oct 24 13:44 UTC | 14 Oct 24 13:44 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:51
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
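
The header above follows the standard klog/glog convention ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). As a minimal illustration (not part of the report or of minikube itself), a Go snippet like the following could split such a line into its fields; the regular expression and field names are assumptions made for the example:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine captures severity, mmdd, time, thread id, file, line and message
	// from a header such as "I1014 13:38:51.387253   15646 out.go:345] ...".
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1014 13:38:51.387253   15646 out.go:345] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}
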
	I1014 13:38:51.387253   15646 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:51.387350   15646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:51.387360   15646 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:51.387366   15646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:51.387583   15646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:38:51.388245   15646 out.go:352] Setting JSON to false
	I1014 13:38:51.389067   15646 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1281,"bootTime":1728911850,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:38:51.389156   15646 start.go:139] virtualization: kvm guest
	I1014 13:38:51.391309   15646 out.go:177] * [addons-313496] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:38:51.392581   15646 notify.go:220] Checking for updates...
	I1014 13:38:51.392598   15646 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:38:51.393881   15646 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:51.395260   15646 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:38:51.396475   15646 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:51.397637   15646 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:38:51.398722   15646 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:38:51.399941   15646 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:51.430268   15646 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 13:38:51.431512   15646 start.go:297] selected driver: kvm2
	I1014 13:38:51.431526   15646 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:38:51.431539   15646 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:38:51.432245   15646 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:51.432329   15646 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:38:51.446115   15646 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:38:51.446145   15646 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:51.446362   15646 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:38:51.446392   15646 cni.go:84] Creating CNI manager for ""
	I1014 13:38:51.446430   15646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:38:51.446440   15646 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:51.446484   15646 start.go:340] cluster config:
	{Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:51.446587   15646 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:51.448247   15646 out.go:177] * Starting "addons-313496" primary control-plane node in "addons-313496" cluster
	I1014 13:38:51.449451   15646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:38:51.449473   15646 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:38:51.449481   15646 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:51.449543   15646 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:38:51.449553   15646 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:38:51.449817   15646 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/config.json ...
	I1014 13:38:51.449834   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/config.json: {Name:mkf74f0baed126ca6fcf2a2185289a294d298977 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:38:51.449946   15646 start.go:360] acquireMachinesLock for addons-313496: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:38:51.449989   15646 start.go:364] duration metric: took 30.931µs to acquireMachinesLock for "addons-313496"
	I1014 13:38:51.450004   15646 start.go:93] Provisioning new machine with config: &{Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:38:51.450050   15646 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 13:38:51.452262   15646 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1014 13:38:51.452363   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:38:51.452392   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:38:51.465595   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33901
	I1014 13:38:51.466026   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:38:51.466579   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:38:51.466610   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:38:51.466969   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:38:51.467183   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:38:51.467347   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:38:51.467473   15646 start.go:159] libmachine.API.Create for "addons-313496" (driver="kvm2")
	I1014 13:38:51.467505   15646 client.go:168] LocalClient.Create starting
	I1014 13:38:51.467549   15646 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:38:51.739836   15646 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:38:51.939968   15646 main.go:141] libmachine: Running pre-create checks...
	I1014 13:38:51.939994   15646 main.go:141] libmachine: (addons-313496) Calling .PreCreateCheck
	I1014 13:38:51.940434   15646 main.go:141] libmachine: (addons-313496) Calling .GetConfigRaw
	I1014 13:38:51.940814   15646 main.go:141] libmachine: Creating machine...
	I1014 13:38:51.940828   15646 main.go:141] libmachine: (addons-313496) Calling .Create
	I1014 13:38:51.940964   15646 main.go:141] libmachine: (addons-313496) Creating KVM machine...
	I1014 13:38:51.942221   15646 main.go:141] libmachine: (addons-313496) DBG | found existing default KVM network
	I1014 13:38:51.942986   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:51.942841   15668 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ba0}
	I1014 13:38:51.943016   15646 main.go:141] libmachine: (addons-313496) DBG | created network xml: 
	I1014 13:38:51.943030   15646 main.go:141] libmachine: (addons-313496) DBG | <network>
	I1014 13:38:51.943041   15646 main.go:141] libmachine: (addons-313496) DBG |   <name>mk-addons-313496</name>
	I1014 13:38:51.943054   15646 main.go:141] libmachine: (addons-313496) DBG |   <dns enable='no'/>
	I1014 13:38:51.943064   15646 main.go:141] libmachine: (addons-313496) DBG |   
	I1014 13:38:51.943077   15646 main.go:141] libmachine: (addons-313496) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 13:38:51.943091   15646 main.go:141] libmachine: (addons-313496) DBG |     <dhcp>
	I1014 13:38:51.943104   15646 main.go:141] libmachine: (addons-313496) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 13:38:51.943115   15646 main.go:141] libmachine: (addons-313496) DBG |     </dhcp>
	I1014 13:38:51.943126   15646 main.go:141] libmachine: (addons-313496) DBG |   </ip>
	I1014 13:38:51.943135   15646 main.go:141] libmachine: (addons-313496) DBG |   
	I1014 13:38:51.943145   15646 main.go:141] libmachine: (addons-313496) DBG | </network>
	I1014 13:38:51.943154   15646 main.go:141] libmachine: (addons-313496) DBG | 
	I1014 13:38:51.948712   15646 main.go:141] libmachine: (addons-313496) DBG | trying to create private KVM network mk-addons-313496 192.168.39.0/24...
	I1014 13:38:52.014654   15646 main.go:141] libmachine: (addons-313496) DBG | private KVM network mk-addons-313496 192.168.39.0/24 created
	I1014 13:38:52.014683   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.014586   15668 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:52.014713   15646 main.go:141] libmachine: (addons-313496) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496 ...
	I1014 13:38:52.014732   15646 main.go:141] libmachine: (addons-313496) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:38:52.014753   15646 main.go:141] libmachine: (addons-313496) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:38:52.283526   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.283416   15668 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa...
	I1014 13:38:52.335983   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.335860   15668 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/addons-313496.rawdisk...
	I1014 13:38:52.336011   15646 main.go:141] libmachine: (addons-313496) DBG | Writing magic tar header
	I1014 13:38:52.336025   15646 main.go:141] libmachine: (addons-313496) DBG | Writing SSH key tar header
	I1014 13:38:52.336038   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:52.336009   15668 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496 ...
	I1014 13:38:52.336165   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496
	I1014 13:38:52.336200   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:38:52.336237   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496 (perms=drwx------)
	I1014 13:38:52.336263   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:38:52.336279   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:52.336297   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:38:52.336305   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:38:52.336312   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:38:52.336319   15646 main.go:141] libmachine: (addons-313496) DBG | Checking permissions on dir: /home
	I1014 13:38:52.336327   15646 main.go:141] libmachine: (addons-313496) DBG | Skipping /home - not owner
	I1014 13:38:52.336344   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:38:52.336365   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:38:52.336376   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:38:52.336384   15646 main.go:141] libmachine: (addons-313496) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:38:52.336396   15646 main.go:141] libmachine: (addons-313496) Creating domain...
	I1014 13:38:52.337426   15646 main.go:141] libmachine: (addons-313496) define libvirt domain using xml: 
	I1014 13:38:52.337465   15646 main.go:141] libmachine: (addons-313496) <domain type='kvm'>
	I1014 13:38:52.337475   15646 main.go:141] libmachine: (addons-313496)   <name>addons-313496</name>
	I1014 13:38:52.337488   15646 main.go:141] libmachine: (addons-313496)   <memory unit='MiB'>4000</memory>
	I1014 13:38:52.337494   15646 main.go:141] libmachine: (addons-313496)   <vcpu>2</vcpu>
	I1014 13:38:52.337505   15646 main.go:141] libmachine: (addons-313496)   <features>
	I1014 13:38:52.337518   15646 main.go:141] libmachine: (addons-313496)     <acpi/>
	I1014 13:38:52.337526   15646 main.go:141] libmachine: (addons-313496)     <apic/>
	I1014 13:38:52.337532   15646 main.go:141] libmachine: (addons-313496)     <pae/>
	I1014 13:38:52.337539   15646 main.go:141] libmachine: (addons-313496)     
	I1014 13:38:52.337545   15646 main.go:141] libmachine: (addons-313496)   </features>
	I1014 13:38:52.337553   15646 main.go:141] libmachine: (addons-313496)   <cpu mode='host-passthrough'>
	I1014 13:38:52.337559   15646 main.go:141] libmachine: (addons-313496)   
	I1014 13:38:52.337568   15646 main.go:141] libmachine: (addons-313496)   </cpu>
	I1014 13:38:52.337574   15646 main.go:141] libmachine: (addons-313496)   <os>
	I1014 13:38:52.337587   15646 main.go:141] libmachine: (addons-313496)     <type>hvm</type>
	I1014 13:38:52.337613   15646 main.go:141] libmachine: (addons-313496)     <boot dev='cdrom'/>
	I1014 13:38:52.337634   15646 main.go:141] libmachine: (addons-313496)     <boot dev='hd'/>
	I1014 13:38:52.337653   15646 main.go:141] libmachine: (addons-313496)     <bootmenu enable='no'/>
	I1014 13:38:52.337661   15646 main.go:141] libmachine: (addons-313496)   </os>
	I1014 13:38:52.337670   15646 main.go:141] libmachine: (addons-313496)   <devices>
	I1014 13:38:52.337682   15646 main.go:141] libmachine: (addons-313496)     <disk type='file' device='cdrom'>
	I1014 13:38:52.337699   15646 main.go:141] libmachine: (addons-313496)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/boot2docker.iso'/>
	I1014 13:38:52.337711   15646 main.go:141] libmachine: (addons-313496)       <target dev='hdc' bus='scsi'/>
	I1014 13:38:52.337721   15646 main.go:141] libmachine: (addons-313496)       <readonly/>
	I1014 13:38:52.337731   15646 main.go:141] libmachine: (addons-313496)     </disk>
	I1014 13:38:52.337741   15646 main.go:141] libmachine: (addons-313496)     <disk type='file' device='disk'>
	I1014 13:38:52.337753   15646 main.go:141] libmachine: (addons-313496)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:38:52.337769   15646 main.go:141] libmachine: (addons-313496)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/addons-313496.rawdisk'/>
	I1014 13:38:52.337784   15646 main.go:141] libmachine: (addons-313496)       <target dev='hda' bus='virtio'/>
	I1014 13:38:52.337792   15646 main.go:141] libmachine: (addons-313496)     </disk>
	I1014 13:38:52.337805   15646 main.go:141] libmachine: (addons-313496)     <interface type='network'>
	I1014 13:38:52.337816   15646 main.go:141] libmachine: (addons-313496)       <source network='mk-addons-313496'/>
	I1014 13:38:52.337842   15646 main.go:141] libmachine: (addons-313496)       <model type='virtio'/>
	I1014 13:38:52.337861   15646 main.go:141] libmachine: (addons-313496)     </interface>
	I1014 13:38:52.337869   15646 main.go:141] libmachine: (addons-313496)     <interface type='network'>
	I1014 13:38:52.337874   15646 main.go:141] libmachine: (addons-313496)       <source network='default'/>
	I1014 13:38:52.337881   15646 main.go:141] libmachine: (addons-313496)       <model type='virtio'/>
	I1014 13:38:52.337884   15646 main.go:141] libmachine: (addons-313496)     </interface>
	I1014 13:38:52.337893   15646 main.go:141] libmachine: (addons-313496)     <serial type='pty'>
	I1014 13:38:52.337907   15646 main.go:141] libmachine: (addons-313496)       <target port='0'/>
	I1014 13:38:52.337919   15646 main.go:141] libmachine: (addons-313496)     </serial>
	I1014 13:38:52.337928   15646 main.go:141] libmachine: (addons-313496)     <console type='pty'>
	I1014 13:38:52.337954   15646 main.go:141] libmachine: (addons-313496)       <target type='serial' port='0'/>
	I1014 13:38:52.337963   15646 main.go:141] libmachine: (addons-313496)     </console>
	I1014 13:38:52.337969   15646 main.go:141] libmachine: (addons-313496)     <rng model='virtio'>
	I1014 13:38:52.337984   15646 main.go:141] libmachine: (addons-313496)       <backend model='random'>/dev/random</backend>
	I1014 13:38:52.338007   15646 main.go:141] libmachine: (addons-313496)     </rng>
	I1014 13:38:52.338019   15646 main.go:141] libmachine: (addons-313496)     
	I1014 13:38:52.338029   15646 main.go:141] libmachine: (addons-313496)     
	I1014 13:38:52.338047   15646 main.go:141] libmachine: (addons-313496)   </devices>
	I1014 13:38:52.338055   15646 main.go:141] libmachine: (addons-313496) </domain>
	I1014 13:38:52.338079   15646 main.go:141] libmachine: (addons-313496) 
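
The XML above is the libvirt domain definition the kvm2 driver submits before booting the VM. A minimal sketch of the same define-then-create flow, assuming the libvirt.org/go/libvirt bindings (illustrative only, not the driver's actual code):

	package main

	import (
		"log"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Same system URI as in the log (KVMQemuURI:qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		// Placeholder for a full definition such as the one printed above.
		domainXML := `<domain type='kvm'> ... </domain>`

		// Define the persistent domain, then boot it ("Creating domain..." in the log).
		dom, err := conn.DomainDefineXML(domainXML)
		if err != nil {
			log.Fatal(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			log.Fatal(err)
		}
	}

The private network mk-addons-313496 created a few lines earlier can be set up the same way with the analogous NetworkDefineXML/Create calls.
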
	I1014 13:38:52.343729   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:64:3a:5d in network default
	I1014 13:38:52.344237   15646 main.go:141] libmachine: (addons-313496) Ensuring networks are active...
	I1014 13:38:52.344257   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:52.344885   15646 main.go:141] libmachine: (addons-313496) Ensuring network default is active
	I1014 13:38:52.345151   15646 main.go:141] libmachine: (addons-313496) Ensuring network mk-addons-313496 is active
	I1014 13:38:52.345656   15646 main.go:141] libmachine: (addons-313496) Getting domain xml...
	I1014 13:38:52.346296   15646 main.go:141] libmachine: (addons-313496) Creating domain...
	I1014 13:38:53.745757   15646 main.go:141] libmachine: (addons-313496) Waiting to get IP...
	I1014 13:38:53.746553   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:53.746898   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:53.746916   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:53.746878   15668 retry.go:31] will retry after 215.074025ms: waiting for machine to come up
	I1014 13:38:53.963342   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:53.963813   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:53.963845   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:53.963781   15668 retry.go:31] will retry after 295.378447ms: waiting for machine to come up
	I1014 13:38:54.260376   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:54.260812   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:54.260845   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:54.260786   15668 retry.go:31] will retry after 389.386084ms: waiting for machine to come up
	I1014 13:38:54.651296   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:54.651725   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:54.651751   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:54.651668   15668 retry.go:31] will retry after 440.219356ms: waiting for machine to come up
	I1014 13:38:55.093378   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:55.093712   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:55.093748   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:55.093685   15668 retry.go:31] will retry after 607.945898ms: waiting for machine to come up
	I1014 13:38:55.703764   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:55.704295   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:55.704323   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:55.704247   15668 retry.go:31] will retry after 629.470004ms: waiting for machine to come up
	I1014 13:38:56.335240   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:56.335665   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:56.335689   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:56.335607   15668 retry.go:31] will retry after 1.050110581s: waiting for machine to come up
	I1014 13:38:57.387517   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:57.387918   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:57.387939   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:57.387892   15668 retry.go:31] will retry after 1.397516625s: waiting for machine to come up
	I1014 13:38:58.787515   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:38:58.787928   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:38:58.787957   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:38:58.787880   15668 retry.go:31] will retry after 1.564506642s: waiting for machine to come up
	I1014 13:39:00.353577   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:00.354008   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:00.354023   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:00.353983   15668 retry.go:31] will retry after 1.737801278s: waiting for machine to come up
	I1014 13:39:02.093401   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:02.093986   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:02.094016   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:02.093940   15668 retry.go:31] will retry after 2.025246342s: waiting for machine to come up
	I1014 13:39:04.122150   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:04.122572   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:04.122592   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:04.122532   15668 retry.go:31] will retry after 3.273652956s: waiting for machine to come up
	I1014 13:39:07.398000   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:07.398455   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:07.398475   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:07.398419   15668 retry.go:31] will retry after 4.219441467s: waiting for machine to come up
	I1014 13:39:11.619652   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:11.620095   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find current IP address of domain addons-313496 in network mk-addons-313496
	I1014 13:39:11.620123   15646 main.go:141] libmachine: (addons-313496) DBG | I1014 13:39:11.620067   15668 retry.go:31] will retry after 4.721306555s: waiting for machine to come up
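
The repeated "will retry after ..." messages above show the driver polling for the guest's DHCP lease with growing, jittered delays. A generic sketch of that wait-with-backoff pattern follows (the helper below is hypothetical and not minikube's retry package):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or the deadline passes,
	// roughly doubling the delay between attempts and adding jitter, similar
	// to the intervals printed in the log above.
	func retryWithBackoff(fn func() error, initial, max time.Duration, deadline time.Time) error {
		wait := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: %w", err)
			}
			jitter := time.Duration(rand.Int63n(int64(wait) / 2))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait+jitter)
			time.Sleep(wait + jitter)
			if wait *= 2; wait > max {
				wait = max
			}
		}
	}

	func main() {
		attempts := 0
		err := retryWithBackoff(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("unable to find current IP address")
			}
			return nil
		}, 200*time.Millisecond, 5*time.Second, time.Now().Add(30*time.Second))
		fmt.Println("err:", err, "attempts:", attempts)
	}
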
	I1014 13:39:16.342673   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:16.343018   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has current primary IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:16.343048   15646 main.go:141] libmachine: (addons-313496) Found IP for machine: 192.168.39.177
	I1014 13:39:16.343066   15646 main.go:141] libmachine: (addons-313496) Reserving static IP address...
	I1014 13:39:16.343413   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find host DHCP lease matching {name: "addons-313496", mac: "52:54:00:12:ec:ab", ip: "192.168.39.177"} in network mk-addons-313496
	I1014 13:39:16.411345   15646 main.go:141] libmachine: (addons-313496) DBG | Getting to WaitForSSH function...
	I1014 13:39:16.411384   15646 main.go:141] libmachine: (addons-313496) Reserved static IP address: 192.168.39.177
	I1014 13:39:16.411396   15646 main.go:141] libmachine: (addons-313496) Waiting for SSH to be available...
	I1014 13:39:16.413855   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:16.414127   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496
	I1014 13:39:16.414153   15646 main.go:141] libmachine: (addons-313496) DBG | unable to find defined IP address of network mk-addons-313496 interface with MAC address 52:54:00:12:ec:ab
	I1014 13:39:16.414317   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH client type: external
	I1014 13:39:16.414339   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa (-rw-------)
	I1014 13:39:16.414400   15646 main.go:141] libmachine: (addons-313496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:39:16.414432   15646 main.go:141] libmachine: (addons-313496) DBG | About to run SSH command:
	I1014 13:39:16.414462   15646 main.go:141] libmachine: (addons-313496) DBG | exit 0
	I1014 13:39:16.426016   15646 main.go:141] libmachine: (addons-313496) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:39:16.426031   15646 main.go:141] libmachine: (addons-313496) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:39:16.426037   15646 main.go:141] libmachine: (addons-313496) DBG | command : exit 0
	I1014 13:39:16.426048   15646 main.go:141] libmachine: (addons-313496) DBG | err     : exit status 255
	I1014 13:39:16.426058   15646 main.go:141] libmachine: (addons-313496) DBG | output  : 
	I1014 13:39:19.428604   15646 main.go:141] libmachine: (addons-313496) DBG | Getting to WaitForSSH function...
	I1014 13:39:19.430754   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.431133   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.431167   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.431269   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH client type: external
	I1014 13:39:19.431296   15646 main.go:141] libmachine: (addons-313496) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa (-rw-------)
	I1014 13:39:19.431329   15646 main.go:141] libmachine: (addons-313496) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.177 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:39:19.431344   15646 main.go:141] libmachine: (addons-313496) DBG | About to run SSH command:
	I1014 13:39:19.431356   15646 main.go:141] libmachine: (addons-313496) DBG | exit 0
	I1014 13:39:19.558866   15646 main.go:141] libmachine: (addons-313496) DBG | SSH cmd err, output: <nil>: 
	I1014 13:39:19.559125   15646 main.go:141] libmachine: (addons-313496) KVM machine creation complete!
	I1014 13:39:19.559429   15646 main.go:141] libmachine: (addons-313496) Calling .GetConfigRaw
	I1014 13:39:19.559949   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:19.560110   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:19.560283   15646 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:39:19.560295   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:19.561460   15646 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:39:19.561473   15646 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:39:19.561478   15646 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:39:19.561483   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.563511   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.563806   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.563830   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.563948   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.564113   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.564250   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.564354   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.564516   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.564703   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.564716   15646 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:39:19.670020   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:39:19.670042   15646 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:39:19.670049   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.672866   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.673182   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.673204   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.673386   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.673579   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.673765   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.673918   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.674079   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.674229   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.674239   15646 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:39:19.783097   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:39:19.783161   15646 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:39:19.783173   15646 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:39:19.783183   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:39:19.783443   15646 buildroot.go:166] provisioning hostname "addons-313496"
	I1014 13:39:19.783468   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:39:19.783648   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.786172   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.786540   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.786561   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.786727   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.786891   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.787059   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.787203   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.787352   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.787501   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.787512   15646 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-313496 && echo "addons-313496" | sudo tee /etc/hostname
	I1014 13:39:19.910742   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-313496
	
	I1014 13:39:19.910773   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:19.913209   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.913514   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:19.913541   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:19.913674   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:19.913846   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.913982   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:19.914156   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:19.914305   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:19.914460   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:19.914475   15646 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-313496' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-313496/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-313496' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:39:20.031821   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:39:20.031848   15646 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:39:20.031884   15646 buildroot.go:174] setting up certificates
	I1014 13:39:20.031894   15646 provision.go:84] configureAuth start
	I1014 13:39:20.031904   15646 main.go:141] libmachine: (addons-313496) Calling .GetMachineName
	I1014 13:39:20.032129   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:20.034872   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.035250   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.035277   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.035388   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.037455   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.037752   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.037786   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.037931   15646 provision.go:143] copyHostCerts
	I1014 13:39:20.037997   15646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:39:20.038152   15646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:39:20.038211   15646 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:39:20.038257   15646 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.addons-313496 san=[127.0.0.1 192.168.39.177 addons-313496 localhost minikube]
	I1014 13:39:20.196030   15646 provision.go:177] copyRemoteCerts
	I1014 13:39:20.196081   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:39:20.196107   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.198559   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.198800   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.198825   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.199004   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.199166   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.199316   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.199420   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.285289   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:39:20.313491   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:39:20.340877   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:39:20.368021   15646 provision.go:87] duration metric: took 336.112374ms to configureAuth
	I1014 13:39:20.368047   15646 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:39:20.368244   15646 config.go:182] Loaded profile config "addons-313496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:20.368324   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.370802   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.371140   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.371168   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.371306   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.371479   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.371637   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.371752   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.371896   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:20.372061   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:20.372074   15646 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:39:20.598613   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
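Note: the CRIO_MINIKUBE_OPTIONS value above ends up in /etc/sysconfig/crio.minikube on the guest. How (and whether) the crio unit actually loads it can be checked from an SSH session with standard systemd commands such as the following (a sketch, not part of this log; it assumes crio is systemd-managed on the guest image):
	sudo systemctl cat crio                    # show the unit and any EnvironmentFile= drop-ins
	sudo systemctl show crio -p Environment    # confirm CRIO_MINIKUBE_OPTIONS was picked up
	sudo cat /etc/sysconfig/crio.minikube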
	
	I1014 13:39:20.598645   15646 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:39:20.598671   15646 main.go:141] libmachine: (addons-313496) Calling .GetURL
	I1014 13:39:20.599851   15646 main.go:141] libmachine: (addons-313496) DBG | Using libvirt version 6000000
	I1014 13:39:20.601952   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.602271   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.602301   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.602460   15646 main.go:141] libmachine: Docker is up and running!
	I1014 13:39:20.602478   15646 main.go:141] libmachine: Reticulating splines...
	I1014 13:39:20.602486   15646 client.go:171] duration metric: took 29.134969553s to LocalClient.Create
	I1014 13:39:20.602509   15646 start.go:167] duration metric: took 29.135036656s to libmachine.API.Create "addons-313496"
	I1014 13:39:20.602519   15646 start.go:293] postStartSetup for "addons-313496" (driver="kvm2")
	I1014 13:39:20.602528   15646 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:39:20.602544   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.602776   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:39:20.602800   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.604658   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.604966   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.604991   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.605087   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.605265   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.605445   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.605553   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.688481   15646 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:39:20.692697   15646 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:39:20.692727   15646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:39:20.692803   15646 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:39:20.692830   15646 start.go:296] duration metric: took 90.306444ms for postStartSetup
	I1014 13:39:20.692862   15646 main.go:141] libmachine: (addons-313496) Calling .GetConfigRaw
	I1014 13:39:20.693442   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:20.695881   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.696139   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.696168   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.696443   15646 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/config.json ...
	I1014 13:39:20.696616   15646 start.go:128] duration metric: took 29.246557136s to createHost
	I1014 13:39:20.696638   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.698700   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.698996   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.699026   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.699192   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.699369   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.699489   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.699613   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.699745   15646 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:20.699898   15646 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.177 22 <nil> <nil>}
	I1014 13:39:20.699907   15646 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:39:20.807573   15646 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728913160.787783114
	
	I1014 13:39:20.807603   15646 fix.go:216] guest clock: 1728913160.787783114
	I1014 13:39:20.807614   15646 fix.go:229] Guest: 2024-10-14 13:39:20.787783114 +0000 UTC Remote: 2024-10-14 13:39:20.696625309 +0000 UTC m=+29.345353748 (delta=91.157805ms)
	I1014 13:39:20.807672   15646 fix.go:200] guest clock delta is within tolerance: 91.157805ms
	I1014 13:39:20.807682   15646 start.go:83] releasing machines lock for "addons-313496", held for 29.35768389s
	I1014 13:39:20.807709   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.807972   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:20.811323   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.811742   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.811773   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.811978   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.812384   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.812516   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:20.812579   15646 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:39:20.812633   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.812683   15646 ssh_runner.go:195] Run: cat /version.json
	I1014 13:39:20.812702   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:20.815092   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815186   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815467   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.815491   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815553   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:20.815590   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.815590   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:20.815771   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.815781   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:20.815923   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:20.815933   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.816070   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:20.816151   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.816169   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:20.919625   15646 ssh_runner.go:195] Run: systemctl --version
	I1014 13:39:20.926280   15646 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:39:21.088801   15646 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:39:21.095670   15646 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:39:21.095743   15646 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:39:21.111973   15646 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:39:21.112006   15646 start.go:495] detecting cgroup driver to use...
	I1014 13:39:21.112069   15646 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:39:21.127345   15646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:39:21.140741   15646 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:39:21.140791   15646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:39:21.153561   15646 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:39:21.167046   15646 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:39:21.276406   15646 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:39:21.441005   15646 docker.go:233] disabling docker service ...
	I1014 13:39:21.441084   15646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:39:21.455334   15646 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:39:21.468467   15646 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:39:21.578055   15646 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:39:21.692980   15646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:39:21.707977   15646 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:39:21.726866   15646 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:39:21.726927   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.737978   15646 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:39:21.738047   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.748930   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.759522   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.770335   15646 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:39:21.781479   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.792499   15646 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.810247   15646 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:39:21.820885   15646 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:39:21.830938   15646 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:39:21.830989   15646 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:39:21.843876   15646 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:39:21.853716   15646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:21.972678   15646 ssh_runner.go:195] Run: sudo systemctl restart crio
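Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following keys, which the `systemctl restart crio` just above then picks up. This is a reconstruction from the commands shown, not a dump of the actual file, and the section headers are assumed from CRI-O's usual layout:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]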
	I1014 13:39:22.067345   15646 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:39:22.067431   15646 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:39:22.072339   15646 start.go:563] Will wait 60s for crictl version
	I1014 13:39:22.072531   15646 ssh_runner.go:195] Run: which crictl
	I1014 13:39:22.076529   15646 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:39:22.115507   15646 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
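Note: the plain `crictl version` / `crictl images` calls in this log work without any endpoint flag because of the /etc/crictl.yaml written a few lines earlier. The explicit equivalent would be (a sketch, not part of the log):
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version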
	I1014 13:39:22.115632   15646 ssh_runner.go:195] Run: crio --version
	I1014 13:39:22.144532   15646 ssh_runner.go:195] Run: crio --version
	I1014 13:39:22.173534   15646 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:39:22.174835   15646 main.go:141] libmachine: (addons-313496) Calling .GetIP
	I1014 13:39:22.177082   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:22.177408   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:22.177427   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:22.177621   15646 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:39:22.181621   15646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:22.193930   15646 kubeadm.go:883] updating cluster {Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1014 13:39:22.194056   15646 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:39:22.194109   15646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:22.224947   15646 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 13:39:22.225026   15646 ssh_runner.go:195] Run: which lz4
	I1014 13:39:22.229066   15646 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 13:39:22.233200   15646 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 13:39:22.233221   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 13:39:23.512534   15646 crio.go:462] duration metric: took 1.28349036s to copy over tarball
	I1014 13:39:23.512611   15646 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 13:39:25.711270   15646 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.198623226s)
	I1014 13:39:25.711303   15646 crio.go:469] duration metric: took 2.198741311s to extract the tarball
	I1014 13:39:25.711310   15646 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 13:39:25.747940   15646 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:25.791900   15646 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:39:25.791923   15646 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:39:25.791941   15646 kubeadm.go:934] updating node { 192.168.39.177 8443 v1.31.1 crio true true} ...
	I1014 13:39:25.792024   15646 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-313496 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.177
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
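Note: the [Unit]/[Service]/[Install] snippet above is the kubelet drop-in that is written a few lines below via `scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf`. On the guest it can be inspected after the fact with (a sketch, not part of the log):
	sudo systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in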
	I1014 13:39:25.792083   15646 ssh_runner.go:195] Run: crio config
	I1014 13:39:25.844006   15646 cni.go:84] Creating CNI manager for ""
	I1014 13:39:25.844029   15646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:39:25.844039   15646 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:39:25.844060   15646 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.177 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-313496 NodeName:addons-313496 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.177"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.177 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:39:25.844222   15646 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.177
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-313496"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.177"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.177"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 13:39:25.844290   15646 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:39:25.854212   15646 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:39:25.854278   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 13:39:25.863717   15646 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1014 13:39:25.879968   15646 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:39:25.899824   15646 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
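Note: the kubeadm config printed above is staged here as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml further down, just before `kubeadm init` runs. A quick manual sanity check of such a file would look like the following (a sketch; it assumes the bundled kubeadm supports the `config validate` subcommand, which recent releases do):
	sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml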
	I1014 13:39:25.917120   15646 ssh_runner.go:195] Run: grep 192.168.39.177	control-plane.minikube.internal$ /etc/hosts
	I1014 13:39:25.921090   15646 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.177	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:25.934049   15646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:26.052990   15646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:26.069049   15646 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496 for IP: 192.168.39.177
	I1014 13:39:26.069079   15646 certs.go:194] generating shared ca certs ...
	I1014 13:39:26.069100   15646 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.069269   15646 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:39:26.255409   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt ...
	I1014 13:39:26.255436   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt: {Name:mk6d2468f99b8c4287fe2a238d837c16037ad4fb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.255590   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key ...
	I1014 13:39:26.255603   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key: {Name:mkdcb4871014a40ba9ec5ec69c1557d9dcc077f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.255676   15646 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:39:26.556583   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt ...
	I1014 13:39:26.556616   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt: {Name:mk85c6001f322affd46dcd9480619fd86038d31e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.556794   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key ...
	I1014 13:39:26.556806   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key: {Name:mk90c3216b24609d702953b1a1eea2d38998c342 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.556876   15646 certs.go:256] generating profile certs ...
	I1014 13:39:26.556926   15646 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.key
	I1014 13:39:26.556941   15646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt with IP's: []
	I1014 13:39:26.768836   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt ...
	I1014 13:39:26.768872   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: {Name:mkc64667b6f7d9ba3450cf77fbbbf751d5546cfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.769070   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.key ...
	I1014 13:39:26.769083   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.key: {Name:mk18645965524d2c8fb3313f2197b04a4cf88847 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.769162   15646 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5
	I1014 13:39:26.769183   15646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.177]
	I1014 13:39:26.893389   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5 ...
	I1014 13:39:26.893417   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5: {Name:mkf95be3ae2a42f2d8a69336c3a3c6ee5d6607f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.893570   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5 ...
	I1014 13:39:26.893582   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5: {Name:mk6184049e18fea6750120810a3ca5a8f6fd8446 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:26.893652   15646 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt.570273a5 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt
	I1014 13:39:26.893738   15646 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key.570273a5 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key
	I1014 13:39:26.893790   15646 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key
	I1014 13:39:26.893803   15646 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt with IP's: []
	I1014 13:39:27.089290   15646 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt ...
	I1014 13:39:27.089323   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt: {Name:mk42855ca2b5da79e664e15abbca8e866afd2d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:27.089487   15646 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key ...
	I1014 13:39:27.089500   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key: {Name:mk34221d212b4f85855df1891610069be5307a70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:27.089680   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:39:27.089713   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:39:27.089737   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:39:27.089761   15646 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:39:27.090299   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:39:27.118965   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:39:27.144956   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:39:27.171082   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:39:27.195761   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 13:39:27.220642   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:39:27.246982   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:39:27.273012   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 13:39:27.299157   15646 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:39:27.325164   15646 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:39:27.342963   15646 ssh_runner.go:195] Run: openssl version
	I1014 13:39:27.348912   15646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:39:27.360496   15646 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:27.365200   15646 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:27.365246   15646 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:27.371664   15646 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
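Note: the `b5213941.0` link name above follows the OpenSSL subject-hash convention for CA directories; the hash printed by the `openssl x509 -hash -noout` call two lines up is presumably what the symlink is named after. A manual re-check would be (a sketch, not part of the log):
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${HASH}.0"    # expected to resolve to minikubeCA.pem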
	I1014 13:39:27.383289   15646 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:39:27.387742   15646 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:39:27.387786   15646 kubeadm.go:392] StartCluster: {Name:addons-313496 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-313496 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:27.387853   15646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:39:27.387894   15646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:39:27.425497   15646 cri.go:89] found id: ""
	I1014 13:39:27.425557   15646 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:39:27.438159   15646 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:39:27.453464   15646 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:39:27.464089   15646 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:39:27.464108   15646 kubeadm.go:157] found existing configuration files:
	
	I1014 13:39:27.464150   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:39:27.475257   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:39:27.475320   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:39:27.491753   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:39:27.500825   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:39:27.500889   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:39:27.510158   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:39:27.518924   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:39:27.518968   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:39:27.527842   15646 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:39:27.536439   15646 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:39:27.536492   15646 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 13:39:27.545597   15646 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 13:39:27.601944   15646 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:39:27.602063   15646 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:39:27.701322   15646 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:39:27.701462   15646 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:39:27.701613   15646 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:39:27.713345   15646 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:39:27.827892   15646 out.go:235]   - Generating certificates and keys ...
	I1014 13:39:27.828001   15646 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:39:27.828073   15646 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:39:27.946021   15646 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:39:28.060767   15646 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:39:28.289701   15646 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:39:28.548524   15646 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:39:28.697329   15646 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:39:28.697488   15646 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-313496 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I1014 13:39:28.765131   15646 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:39:28.765304   15646 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-313496 localhost] and IPs [192.168.39.177 127.0.0.1 ::1]
	I1014 13:39:29.101863   15646 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:39:29.551101   15646 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:39:29.663371   15646 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:39:29.663674   15646 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:39:29.865105   15646 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:39:29.952155   15646 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:39:30.044018   15646 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:39:30.256677   15646 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:39:30.338557   15646 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:39:30.339041   15646 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:39:30.341405   15646 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:39:30.343273   15646 out.go:235]   - Booting up control plane ...
	I1014 13:39:30.343393   15646 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:39:30.343512   15646 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:39:30.343621   15646 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:39:30.358628   15646 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:39:30.364469   15646 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:39:30.364536   15646 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:39:30.493913   15646 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:39:30.494048   15646 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:39:30.993569   15646 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.204534ms
	I1014 13:39:30.993706   15646 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:39:36.492451   15646 kubeadm.go:310] [api-check] The API server is healthy after 5.501411992s
	I1014 13:39:36.505466   15646 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:39:36.518910   15646 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:39:36.547182   15646 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:39:36.547439   15646 kubeadm.go:310] [mark-control-plane] Marking the node addons-313496 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:39:36.560761   15646 kubeadm.go:310] [bootstrap-token] Using token: eva5uq.q6cssgtl8dwhgruv
	I1014 13:39:36.562053   15646 out.go:235]   - Configuring RBAC rules ...
	I1014 13:39:36.562186   15646 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:39:36.578636   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:39:36.587355   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:39:36.591902   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:39:36.599529   15646 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:39:36.605539   15646 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:39:36.899618   15646 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:39:37.328157   15646 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:39:37.898413   15646 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:39:37.899386   15646 kubeadm.go:310] 
	I1014 13:39:37.899472   15646 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:39:37.899485   15646 kubeadm.go:310] 
	I1014 13:39:37.899630   15646 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:39:37.899651   15646 kubeadm.go:310] 
	I1014 13:39:37.899704   15646 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:39:37.899762   15646 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:39:37.899813   15646 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:39:37.899821   15646 kubeadm.go:310] 
	I1014 13:39:37.899865   15646 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:39:37.899877   15646 kubeadm.go:310] 
	I1014 13:39:37.899948   15646 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:39:37.899958   15646 kubeadm.go:310] 
	I1014 13:39:37.900034   15646 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:39:37.900123   15646 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:39:37.900192   15646 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:39:37.900200   15646 kubeadm.go:310] 
	I1014 13:39:37.900273   15646 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:39:37.900345   15646 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:39:37.900351   15646 kubeadm.go:310] 
	I1014 13:39:37.900429   15646 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token eva5uq.q6cssgtl8dwhgruv \
	I1014 13:39:37.900558   15646 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 13:39:37.900582   15646 kubeadm.go:310] 	--control-plane 
	I1014 13:39:37.900597   15646 kubeadm.go:310] 
	I1014 13:39:37.900677   15646 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:39:37.900687   15646 kubeadm.go:310] 
	I1014 13:39:37.900764   15646 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token eva5uq.q6cssgtl8dwhgruv \
	I1014 13:39:37.900855   15646 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 13:39:37.901743   15646 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:39:37.901832   15646 cni.go:84] Creating CNI manager for ""
	I1014 13:39:37.901849   15646 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:39:37.903695   15646 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 13:39:37.905081   15646 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 13:39:37.916217   15646 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 13:39:37.935796   15646 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:39:37.935872   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:37.935873   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-313496 minikube.k8s.io/updated_at=2024_10_14T13_39_37_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=addons-313496 minikube.k8s.io/primary=true
	I1014 13:39:38.093769   15646 ops.go:34] apiserver oom_adj: -16
	I1014 13:39:38.093895   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:38.594442   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:39.094058   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:39.594381   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:40.093992   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:40.593955   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:41.094408   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:41.594198   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:42.094364   15646 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:42.202859   15646 kubeadm.go:1113] duration metric: took 4.267047648s to wait for elevateKubeSystemPrivileges
	I1014 13:39:42.202892   15646 kubeadm.go:394] duration metric: took 14.815109732s to StartCluster
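The burst of `kubectl get sa default` runs above is the elevateKubeSystemPrivileges wait: minikube binds the kube-system default service account to cluster-admin (the clusterrolebinding created at 13:39:37.935872) and then polls roughly every 500ms until the default service account exists, which signals that the API server and controller manager are serving. A minimal client-go sketch of that polling loop, assuming the on-node kubeconfig path; this is an illustration, not minikube's actual code:

	// Illustrative sketch: poll until the "default" ServiceAccount exists,
	// mirroring the repeated `kubectl get sa default` calls in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path as used on the minikube node (assumed for this sketch).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			// The default ServiceAccount is created asynchronously by the
			// controller manager, so its presence is a cheap readiness signal.
			_, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
			if err == nil {
				fmt.Println("default service account is present")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		panic("timed out waiting for the default service account")
	}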
	I1014 13:39:42.202908   15646 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:42.203041   15646 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:39:42.203403   15646 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:42.203649   15646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:39:42.203676   15646 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.177 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:39:42.203723   15646 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1014 13:39:42.203835   15646 addons.go:69] Setting yakd=true in profile "addons-313496"
	I1014 13:39:42.203851   15646 config.go:182] Loaded profile config "addons-313496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:42.203859   15646 addons.go:234] Setting addon yakd=true in "addons-313496"
	I1014 13:39:42.203864   15646 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-313496"
	I1014 13:39:42.203869   15646 addons.go:69] Setting storage-provisioner=true in profile "addons-313496"
	I1014 13:39:42.203881   15646 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-313496"
	I1014 13:39:42.203849   15646 addons.go:69] Setting inspektor-gadget=true in profile "addons-313496"
	I1014 13:39:42.203885   15646 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-313496"
	I1014 13:39:42.203902   15646 addons.go:234] Setting addon inspektor-gadget=true in "addons-313496"
	I1014 13:39:42.203905   15646 addons.go:69] Setting registry=true in profile "addons-313496"
	I1014 13:39:42.203909   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203912   15646 addons.go:69] Setting volcano=true in profile "addons-313496"
	I1014 13:39:42.203915   15646 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-313496"
	I1014 13:39:42.203919   15646 addons.go:234] Setting addon registry=true in "addons-313496"
	I1014 13:39:42.203926   15646 addons.go:234] Setting addon volcano=true in "addons-313496"
	I1014 13:39:42.203942   15646 addons.go:69] Setting volumesnapshots=true in profile "addons-313496"
	I1014 13:39:42.203946   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203904   15646 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-313496"
	I1014 13:39:42.203952   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203956   15646 addons.go:234] Setting addon volumesnapshots=true in "addons-313496"
	I1014 13:39:42.203971   15646 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-313496"
	I1014 13:39:42.203976   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204309   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204323   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204350   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204390   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204406   15646 addons.go:69] Setting cloud-spanner=true in profile "addons-313496"
	I1014 13:39:42.204418   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204425   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204431   15646 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-313496"
	I1014 13:39:42.204450   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204459   15646 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-313496"
	I1014 13:39:42.204467   15646 addons.go:69] Setting gcp-auth=true in profile "addons-313496"
	I1014 13:39:42.204486   15646 addons.go:69] Setting ingress=true in profile "addons-313496"
	I1014 13:39:42.204498   15646 addons.go:69] Setting default-storageclass=true in profile "addons-313496"
	I1014 13:39:42.204510   15646 addons.go:69] Setting ingress-dns=true in profile "addons-313496"
	I1014 13:39:42.204515   15646 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-313496"
	I1014 13:39:42.204520   15646 addons.go:234] Setting addon ingress-dns=true in "addons-313496"
	I1014 13:39:42.204546   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204558   15646 addons.go:69] Setting metrics-server=true in profile "addons-313496"
	I1014 13:39:42.204583   15646 addons.go:234] Setting addon metrics-server=true in "addons-313496"
	I1014 13:39:42.204613   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204422   15646 addons.go:234] Setting addon cloud-spanner=true in "addons-313496"
	I1014 13:39:42.204671   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203948   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204857   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204882   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204903   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204929   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204381   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204952   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.204489   15646 mustload.go:65] Loading cluster: addons-313496
	I1014 13:39:42.204973   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204490   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.204993   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204501   15646 addons.go:234] Setting addon ingress=true in "addons-313496"
	I1014 13:39:42.203895   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.203897   15646 addons.go:234] Setting addon storage-provisioner=true in "addons-313496"
	I1014 13:39:42.203930   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.205157   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205183   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.204351   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.205311   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205334   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.205388   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205391   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.205410   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.205613   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.205654   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.206750   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.206835   15646 out.go:177] * Verifying Kubernetes components...
	I1014 13:39:42.208771   15646 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:42.225995   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39055
	I1014 13:39:42.226275   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I1014 13:39:42.226405   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42089
	I1014 13:39:42.226679   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.226767   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.226902   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39259
	I1014 13:39:42.226975   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.227146   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.227156   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.227160   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.227169   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.227338   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.227524   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.227552   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.227757   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.227776   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.227777   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.228189   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.228235   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.228289   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.228311   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.228594   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.228874   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.229491   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.232902   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35653
	I1014 13:39:42.234981   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.235020   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.235079   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.235118   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.236142   15646 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-313496"
	I1014 13:39:42.236181   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.236430   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.236484   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.236957   15646 config.go:182] Loaded profile config "addons-313496": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:39:42.237351   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.237389   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.237907   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.237950   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.264497   15646 addons.go:234] Setting addon default-storageclass=true in "addons-313496"
	I1014 13:39:42.264561   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.265068   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.267185   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.267228   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.267553   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43839
	I1014 13:39:42.267885   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46467
	I1014 13:39:42.267996   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
	I1014 13:39:42.268462   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.268544   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.270156   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.270266   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.270348   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.270530   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.270563   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.270742   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.270760   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.271031   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.271051   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.271204   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.271429   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.271839   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.271873   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.272182   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.272200   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.272933   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.272969   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.273495   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.273501   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I1014 13:39:42.273568   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.274118   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.274221   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.274222   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.274315   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.274768   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.274793   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.275310   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.275961   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.276009   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.276199   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.278167   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1014 13:39:42.279251   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 13:39:42.279270   15646 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 13:39:42.279291   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.282850   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.283243   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.283268   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.283566   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.283739   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.283850   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.283957   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.286562   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39943
	I1014 13:39:42.286763   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I1014 13:39:42.286813   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I1014 13:39:42.286822   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33117
	I1014 13:39:42.287261   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.287271   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.287544   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.287788   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.287809   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.288529   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45331
	I1014 13:39:42.288781   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.288924   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.288938   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.288958   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.288988   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.289024   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.289671   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.289729   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.290245   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.290684   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46139
	I1014 13:39:42.291027   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.291064   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.291096   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.291115   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.291135   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.291372   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.291612   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.291806   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.292356   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.292389   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.292660   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.292876   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.292906   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.293626   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.293773   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46209
	I1014 13:39:42.294431   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.294468   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.294905   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.295455   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.295472   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.295651   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:42.295857   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.296026   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.296085   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.296209   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.298007   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.299160   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.299196   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1014 13:39:42.299161   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.299237   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.299949   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.299986   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.300807   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.300879   15646 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1014 13:39:42.301612   15646 out.go:177]   - Using image docker.io/busybox:stable
	I1014 13:39:42.306513   15646 out.go:177]   - Using image docker.io/registry:2.8.3
	I1014 13:39:42.306892   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.306914   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.307396   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.308050   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.308076   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.308925   15646 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 13:39:42.309138   15646 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 13:39:42.309153   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 13:39:42.309182   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.310703   15646 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:42.310722   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 13:39:42.310743   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.313269   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.313690   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.313714   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.314059   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.314267   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.314425   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.314577   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.315615   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.316306   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.316325   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.316503   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.316652   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.316782   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.316890   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.323002   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46057
	I1014 13:39:42.323563   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.324196   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.324214   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.324612   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.325187   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.325227   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.326442   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43329
	I1014 13:39:42.327489   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.327989   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.328004   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.328351   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.328497   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.330042   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.330633   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I1014 13:39:42.331146   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.331582   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.331598   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.332122   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.332468   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 13:39:42.332696   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.332720   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.333062   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41445
	I1014 13:39:42.333224   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38645
	I1014 13:39:42.333805   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.334195   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.334425   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I1014 13:39:42.334798   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.334912   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.334940   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.334953   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.334956   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.335166   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38027
	I1014 13:39:42.335267   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.335283   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.335319   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.335495   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.335546   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.335675   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.335692   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 13:39:42.336411   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.336788   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.336802   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.336857   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36123
	I1014 13:39:42.337319   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.337710   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.337982   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.337999   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.338036   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.338355   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.338438   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.338481   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.338573   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.338866   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 13:39:42.339186   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:42.339224   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:42.341316   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 13:39:42.341740   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.341742   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.341934   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:42.341943   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:42.343271   15646 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1014 13:39:42.343316   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 13:39:42.343823   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:42.343862   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:42.343877   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:42.343890   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:42.343902   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:42.344615   15646 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:42.344632   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 13:39:42.344649   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.345580   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43085
	I1014 13:39:42.345953   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:42.345966   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	W1014 13:39:42.346032   15646 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1014 13:39:42.346499   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 13:39:42.346945   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.348191   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I1014 13:39:42.348649   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.348851   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.348864   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.349151   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.349168   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.349573   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.349734   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.350076   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 13:39:42.350411   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.350647   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.350957   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.350975   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.351234   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.351385   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.351573   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.351625   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.351909   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.352179   15646 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 13:39:42.352652   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.353118   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 13:39:42.353135   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 13:39:42.353154   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.353687   15646 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1014 13:39:42.354771   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 13:39:42.354786   15646 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 13:39:42.354804   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.356301   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.357778   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.357803   15646 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 13:39:42.358217   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.358236   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.358453   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.358640   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.358770   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.358885   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.358985   15646 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:42.358996   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 13:39:42.359012   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.359675   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39315
	I1014 13:39:42.359819   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.360128   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.360154   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.360163   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.360349   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.360649   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.360815   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.360925   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.361270   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.361283   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.361587   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.361741   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.362145   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.362944   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.363810   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.363822   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.363841   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I1014 13:39:42.363990   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.364173   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.364224   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.364316   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.364373   15646 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1014 13:39:42.364597   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.364666   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.364676   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.365638   15646 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:42.365655   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1014 13:39:42.365670   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.365845   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40461
	I1014 13:39:42.365928   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.365995   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34805
	I1014 13:39:42.366109   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.366365   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.366796   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.366854   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.367045   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.367284   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.367547   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.367560   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.367609   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.367778   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37369
	I1014 13:39:42.367977   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.368213   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.368366   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.368894   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.368911   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.368967   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.369276   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.369431   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.369580   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.369773   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.369959   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.369990   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.370090   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.370218   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.371237   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371253   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371259   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42769
	I1014 13:39:42.371311   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371375   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.371695   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.372097   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.372116   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.372869   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.372999   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.373869   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1014 13:39:42.373883   15646 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:39:42.373883   15646 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1014 13:39:42.374346   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.373947   15646 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1014 13:39:42.375479   15646 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:42.375500   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:39:42.375517   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.376015   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:42.376024   15646 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 13:39:42.376101   15646 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:42.376118   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 13:39:42.376134   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.376620   15646 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 13:39:42.376640   15646 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1014 13:39:42.376656   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.376887   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 13:39:42.376898   15646 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 13:39:42.376937   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.377939   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:42.379302   15646 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:42.379322   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 13:39:42.379329   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.379350   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.380269   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.380300   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.380572   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.380754   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.380889   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.381203   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.381531   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.381560   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.381634   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.381664   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.381862   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.381981   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.382079   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.382782   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.382933   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.383154   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.383354   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.383498   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.383614   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.383922   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384284   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384333   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384368   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.384440   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.384400   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.384409   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.384415   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.384680   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.384753   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.384755   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.384876   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.384909   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.384927   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.385189   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.387638   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34495
	I1014 13:39:42.388038   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:42.388454   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:42.388472   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:42.388758   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:42.388883   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:42.390116   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:42.390389   15646 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:42.390403   15646 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:39:42.390418   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:42.392871   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.393174   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:42.393209   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:42.393313   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:42.393455   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:42.393565   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:42.393677   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:42.684283   15646 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 13:39:42.684316   15646 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 13:39:42.693645   15646 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:42.693913   15646 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:39:42.766772   15646 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 13:39:42.766809   15646 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 13:39:42.802942   15646 node_ready.go:35] waiting up to 6m0s for node "addons-313496" to be "Ready" ...
	I1014 13:39:42.808996   15646 node_ready.go:49] node "addons-313496" has status "Ready":"True"
	I1014 13:39:42.809022   15646 node_ready.go:38] duration metric: took 6.048354ms for node "addons-313496" to be "Ready" ...
	I1014 13:39:42.809034   15646 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:39:42.823858   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:42.835590   15646 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:42.887640   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:42.955240   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:42.956393   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:42.970325   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 13:39:42.970352   15646 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 13:39:42.993800   15646 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 13:39:42.993833   15646 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 13:39:42.993905   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:42.995375   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:43.016630   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 13:39:43.016655   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 13:39:43.021758   15646 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:43.021786   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1014 13:39:43.032306   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:43.039172   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 13:39:43.039199   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 13:39:43.048622   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:43.078447   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 13:39:43.078472   15646 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 13:39:43.104107   15646 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 13:39:43.104132   15646 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 13:39:43.136090   15646 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:43.136119   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 13:39:43.246085   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 13:39:43.246111   15646 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 13:39:43.252548   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:43.263816   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 13:39:43.263835   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 13:39:43.347955   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:43.394250   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 13:39:43.394280   15646 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 13:39:43.458649   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 13:39:43.458674   15646 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 13:39:43.460932   15646 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:43.460945   15646 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 13:39:43.549628   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 13:39:43.549651   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 13:39:43.649470   15646 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:43.649494   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 13:39:43.703258   15646 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:43.703279   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 13:39:43.723388   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:43.795726   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 13:39:43.795756   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 13:39:43.865182   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:43.897163   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:44.074622   15646 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 13:39:44.074650   15646 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 13:39:44.304178   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 13:39:44.304205   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 13:39:44.545782   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 13:39:44.545807   15646 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 13:39:44.841969   15646 pod_ready.go:103] pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace has status "Ready":"False"
	I1014 13:39:44.882655   15646 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.188706042s)
	I1014 13:39:44.882693   15646 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
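Note: the ssh_runner command that just completed (started at 13:39:42.693913) is how minikube makes the hypervisor host resolvable from inside the cluster. It reads the coredns ConfigMap, uses sed to insert a log directive before the errors line and a hosts stanza immediately before the forward plugin, then replaces the ConfigMap. A sketch of the resulting Corefile fragment, reconstructed from that sed expression (the remaining standard plugins are unchanged and elided with "..."):

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }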
	I1014 13:39:45.038028   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 13:39:45.038049   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 13:39:45.386815   15646 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-313496" context rescaled to 1 replicas
	I1014 13:39:45.390491   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 13:39:45.390510   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 13:39:45.592885   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.768989319s)
	I1014 13:39:45.592929   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.592940   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.592945   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.705272099s)
	I1014 13:39:45.592985   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.593000   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.593237   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593254   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.593263   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.593270   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.593289   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593301   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.593310   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.593317   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.593596   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:45.593611   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:45.593625   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593636   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.593641   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.593652   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.618234   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:45.618254   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:45.618519   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:45.618559   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:45.618568   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:45.651367   15646 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:45.651396   15646 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 13:39:46.001746   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:46.843144   15646 pod_ready.go:103] pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace has status "Ready":"False"
	I1014 13:39:47.514577   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.55929112s)
	I1014 13:39:47.514659   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:47.514672   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:47.514945   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:47.514955   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:47.514965   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:47.514974   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:47.514981   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:47.515195   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:47.515207   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:47.933264   15646 pod_ready.go:93] pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:47.933286   15646 pod_ready.go:82] duration metric: took 5.097659847s for pod "coredns-7c65d6cfc9-69r77" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:47.933297   15646 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gmrsw" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.952604   15646 pod_ready.go:93] pod "coredns-7c65d6cfc9-gmrsw" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:48.952626   15646 pod_ready.go:82] duration metric: took 1.019321331s for pod "coredns-7c65d6cfc9-gmrsw" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.952635   15646 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.967174   15646 pod_ready.go:93] pod "etcd-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:48.967195   15646 pod_ready.go:82] duration metric: took 14.554496ms for pod "etcd-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:48.967204   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:49.360379   15646 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 13:39:49.360421   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:49.363514   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:49.363880   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:49.363921   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:49.364125   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:49.364299   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:49.364464   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:49.364578   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:49.826808   15646 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1014 13:39:50.004828   15646 pod_ready.go:93] pod "kube-apiserver-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.004851   15646 pod_ready.go:82] duration metric: took 1.037640433s for pod "kube-apiserver-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.004861   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.013782   15646 pod_ready.go:93] pod "kube-controller-manager-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.013803   15646 pod_ready.go:82] duration metric: took 8.935744ms for pod "kube-controller-manager-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.013813   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7zvnt" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.030942   15646 pod_ready.go:93] pod "kube-proxy-7zvnt" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.030963   15646 pod_ready.go:82] duration metric: took 17.143392ms for pod "kube-proxy-7zvnt" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.030972   15646 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.072642   15646 addons.go:234] Setting addon gcp-auth=true in "addons-313496"
	I1014 13:39:50.072693   15646 host.go:66] Checking if "addons-313496" exists ...
	I1014 13:39:50.073063   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:50.073106   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:50.088039   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43183
	I1014 13:39:50.088975   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:50.089537   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:50.089558   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:50.089869   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:50.090391   15646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:39:50.090421   15646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:39:50.105781   15646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38569
	I1014 13:39:50.106309   15646 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:39:50.106798   15646 main.go:141] libmachine: Using API Version  1
	I1014 13:39:50.106820   15646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:39:50.107193   15646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:39:50.107380   15646 main.go:141] libmachine: (addons-313496) Calling .GetState
	I1014 13:39:50.108955   15646 main.go:141] libmachine: (addons-313496) Calling .DriverName
	I1014 13:39:50.109147   15646 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 13:39:50.109172   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHHostname
	I1014 13:39:50.111728   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:50.112149   15646 main.go:141] libmachine: (addons-313496) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:12:ec:ab", ip: ""} in network mk-addons-313496: {Iface:virbr1 ExpiryTime:2024-10-14 14:39:07 +0000 UTC Type:0 Mac:52:54:00:12:ec:ab Iaid: IPaddr:192.168.39.177 Prefix:24 Hostname:addons-313496 Clientid:01:52:54:00:12:ec:ab}
	I1014 13:39:50.112178   15646 main.go:141] libmachine: (addons-313496) DBG | domain addons-313496 has defined IP address 192.168.39.177 and MAC address 52:54:00:12:ec:ab in network mk-addons-313496
	I1014 13:39:50.112325   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHPort
	I1014 13:39:50.112483   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHKeyPath
	I1014 13:39:50.112625   15646 main.go:141] libmachine: (addons-313496) Calling .GetSSHUsername
	I1014 13:39:50.112732   15646 sshutil.go:53] new ssh client: &{IP:192.168.39.177 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/addons-313496/id_rsa Username:docker}
	I1014 13:39:50.244152   15646 pod_ready.go:93] pod "kube-scheduler-addons-313496" in "kube-system" namespace has status "Ready":"True"
	I1014 13:39:50.244175   15646 pod_ready.go:82] duration metric: took 213.196586ms for pod "kube-scheduler-addons-313496" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:50.244185   15646 pod_ready.go:39] duration metric: took 7.435139141s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:39:50.244201   15646 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:39:50.244266   15646 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:39:50.966025   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.009596819s)
	I1014 13:39:50.966069   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966078   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.972147576s)
	I1014 13:39:50.966097   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966080   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966111   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966182   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.970773376s)
	I1014 13:39:50.966202   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (7.933872301s)
	I1014 13:39:50.966218   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966227   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966230   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966236   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966523   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.966555   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.966581   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966611   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966610   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.966618   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966578   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966630   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966638   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966641   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.966621   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966639   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966665   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966668   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966673   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966714   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.966725   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.966731   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966732   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.966976   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.967005   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.967013   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.967059   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.967080   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.967087   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.968378   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.968397   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.968622   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.968654   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.968672   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.968685   15646 addons.go:475] Verifying addon ingress=true in "addons-313496"
	I1014 13:39:50.969687   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (7.921036739s)
	I1014 13:39:50.969720   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969730   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969778   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (7.717202406s)
	I1014 13:39:50.969810   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969820   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.621838251s)
	I1014 13:39:50.969835   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969845   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969821   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969893   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.246477089s)
	I1014 13:39:50.969909   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969923   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.969924   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.969926   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969932   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.969940   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.969946   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.969984   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.10477213s)
	I1014 13:39:50.970009   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970022   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970067   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.072874539s)
	I1014 13:39:50.970095   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970108   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	W1014 13:39:50.970101   15646 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 13:39:50.970117   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970124   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970124   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.970129   15646 retry.go:31] will retry after 197.386027ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
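Note: the "no matches for kind \"VolumeSnapshotClass\"" failure above is the usual CRD-establishment race: the volumesnapshot CRDs and a VolumeSnapshotClass object that depends on them are applied in one kubectl pass, and the API server has not yet registered the freshly created CRD when it validates the custom resource, hence "ensure CRDs are installed first". The log shows the built-in recovery path: retry.go schedules a retry after ~197ms and the command at 13:39:51.168271 re-runs the apply with --force. Outside such a retry loop, a common way to avoid the race is to apply the CRDs first and wait for them to reach the Established condition before applying dependent resources; a minimal sketch, not what minikube itself does, with file and CRD names taken from the log:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=Established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml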
	I1014 13:39:50.970270   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970279   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970371   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970383   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970383   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.970391   15646 addons.go:475] Verifying addon registry=true in "addons-313496"
	I1014 13:39:50.970409   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970419   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970426   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970433   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970383   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.970627   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970641   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970661   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.970671   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.970895   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970910   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970959   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.970968   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.970977   15646 addons.go:475] Verifying addon metrics-server=true in "addons-313496"
	I1014 13:39:50.971132   15646 out.go:177] * Verifying ingress addon...
	I1014 13:39:50.972101   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.972137   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.972145   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.972152   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:50.972159   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:50.972225   15646 out.go:177] * Verifying registry addon...
	I1014 13:39:50.973136   15646 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-313496 service yakd-dashboard -n yakd-dashboard
	
	I1014 13:39:50.974069   15646 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1014 13:39:50.974965   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:50.974977   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:50.974991   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:50.975001   15646 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 13:39:50.999401   15646 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 13:39:50.999423   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:50.999519   15646 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 13:39:50.999537   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:51.058405   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:51.058426   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:51.058692   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:51.058711   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:51.168271   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:51.544865   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:51.545177   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:51.990522   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:51.990613   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:52.484812   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:52.485488   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:52.991292   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:52.991835   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:53.471206   15646 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.226915486s)
	I1014 13:39:53.471252   15646 api_server.go:72] duration metric: took 11.26754305s to wait for apiserver process to appear ...
	I1014 13:39:53.471260   15646 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:39:53.471281   15646 api_server.go:253] Checking apiserver healthz at https://192.168.39.177:8443/healthz ...
	I1014 13:39:53.471281   15646 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.362116774s)
	I1014 13:39:53.471206   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.469401713s)
	I1014 13:39:53.471393   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.471417   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.471432   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.303118957s)
	I1014 13:39:53.471465   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.471481   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.471743   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.471757   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.471765   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.471771   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.472071   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:53.472126   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.472144   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.472159   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:53.472169   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:53.472272   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:53.472284   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.472296   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.472327   15646 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-313496"
	I1014 13:39:53.472426   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:53.472604   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:53.472453   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:53.473181   15646 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:53.474089   15646 out.go:177] * Verifying csi-hostpath-driver addon...
	I1014 13:39:53.475706   15646 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 13:39:53.476497   15646 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 13:39:53.477085   15646 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 13:39:53.477106   15646 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 13:39:53.483784   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:53.483954   15646 api_server.go:279] https://192.168.39.177:8443/healthz returned 200:
	ok
	I1014 13:39:53.484115   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:53.485601   15646 api_server.go:141] control plane version: v1.31.1
	I1014 13:39:53.485620   15646 api_server.go:131] duration metric: took 14.353511ms to wait for apiserver health ...
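Note: the healthz probe above is a plain GET against the API server endpoint recorded in the log; on a default kubeadm/minikube cluster the /healthz, /livez and /readyz paths are readable anonymously via the system:public-info-viewer role, so the check is roughly equivalent to the command below (the -k flag skips verification of the cluster's self-signed CA, whereas minikube's own client uses the CA from the kubeconfig):

    curl -k https://192.168.39.177:8443/healthz
    # expected response body: ok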
	I1014 13:39:53.485628   15646 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:39:53.498326   15646 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 13:39:53.498347   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:53.514436   15646 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 13:39:53.514462   15646 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 13:39:53.532248   15646 system_pods.go:59] 19 kube-system pods found
	I1014 13:39:53.532289   15646 system_pods.go:61] "amd-gpu-device-plugin-m9mtz" [2fc02ee9-2529-4893-abc3-e638a461db45] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1014 13:39:53.532298   15646 system_pods.go:61] "coredns-7c65d6cfc9-69r77" [1c55ebf0-8189-43c8-b05c-375564deee96] Running
	I1014 13:39:53.532305   15646 system_pods.go:61] "coredns-7c65d6cfc9-gmrsw" [bb4aafb5-707d-46b8-8f09-da731dd7b975] Running
	I1014 13:39:53.532312   15646 system_pods.go:61] "csi-hostpath-attacher-0" [35914d73-1e05-4cb8-a4a9-ef439861030f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 13:39:53.532318   15646 system_pods.go:61] "csi-hostpath-resizer-0" [d664c078-5d63-4a85-af0e-797d001ec728] Pending
	I1014 13:39:53.532334   15646 system_pods.go:61] "csi-hostpathplugin-vcsrg" [f0796f57-a38e-4662-b0db-f8717051d902] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 13:39:53.532343   15646 system_pods.go:61] "etcd-addons-313496" [7f91653e-02a7-4c1b-9e71-445271163d23] Running
	I1014 13:39:53.532349   15646 system_pods.go:61] "kube-apiserver-addons-313496" [4d56adc4-d1cd-4c02-9cc3-92236aaeb40a] Running
	I1014 13:39:53.532355   15646 system_pods.go:61] "kube-controller-manager-addons-313496" [584d1c59-ade1-4c41-96fe-8d7b394b06f3] Running
	I1014 13:39:53.532364   15646 system_pods.go:61] "kube-ingress-dns-minikube" [664164ae-6d4b-47d0-8091-c4a9ae18ae9a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 13:39:53.532372   15646 system_pods.go:61] "kube-proxy-7zvnt" [357a51d7-a6c0-4616-aef2-fe9c7074e51d] Running
	I1014 13:39:53.532379   15646 system_pods.go:61] "kube-scheduler-addons-313496" [ec2ff7d8-274f-469f-a656-1f1267296410] Running
	I1014 13:39:53.532388   15646 system_pods.go:61] "metrics-server-84c5f94fbc-cggcl" [33ed4d65-0bcf-4a12-beaf-298d4c5f2714] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 13:39:53.532400   15646 system_pods.go:61] "nvidia-device-plugin-daemonset-kkmfm" [846014ef-c2c5-47a1-b0ae-3e582a248ee6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 13:39:53.532412   15646 system_pods.go:61] "registry-66c9cd494c-kxfcz" [a4d53217-34bc-44bb-8e30-d6b8914b6825] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 13:39:53.532424   15646 system_pods.go:61] "registry-proxy-xsptb" [ed9b7051-496c-4b26-be7b-c8c2afd04b8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 13:39:53.532436   15646 system_pods.go:61] "snapshot-controller-56fcc65765-ttgh7" [c91f9671-b7dc-43c9-b0f2-347714aec2ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.532450   15646 system_pods.go:61] "snapshot-controller-56fcc65765-vvqh6" [ee08ee62-c76c-4fcf-947e-9dd882c3e072] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.532458   15646 system_pods.go:61] "storage-provisioner" [3ad1bb99-d287-4642-957b-3d383adfa12a] Running
	I1014 13:39:53.532467   15646 system_pods.go:74] duration metric: took 46.83231ms to wait for pod list to return data ...
	I1014 13:39:53.532478   15646 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:39:53.542073   15646 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:39:53.542104   15646 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 13:39:53.551633   15646 default_sa.go:45] found service account: "default"
	I1014 13:39:53.551659   15646 default_sa.go:55] duration metric: took 19.17261ms for default service account to be created ...
	I1014 13:39:53.551670   15646 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:39:53.576850   15646 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:39:53.620425   15646 system_pods.go:86] 19 kube-system pods found
	I1014 13:39:53.620472   15646 system_pods.go:89] "amd-gpu-device-plugin-m9mtz" [2fc02ee9-2529-4893-abc3-e638a461db45] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1014 13:39:53.620481   15646 system_pods.go:89] "coredns-7c65d6cfc9-69r77" [1c55ebf0-8189-43c8-b05c-375564deee96] Running
	I1014 13:39:53.620489   15646 system_pods.go:89] "coredns-7c65d6cfc9-gmrsw" [bb4aafb5-707d-46b8-8f09-da731dd7b975] Running
	I1014 13:39:53.620498   15646 system_pods.go:89] "csi-hostpath-attacher-0" [35914d73-1e05-4cb8-a4a9-ef439861030f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 13:39:53.620507   15646 system_pods.go:89] "csi-hostpath-resizer-0" [d664c078-5d63-4a85-af0e-797d001ec728] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 13:39:53.620516   15646 system_pods.go:89] "csi-hostpathplugin-vcsrg" [f0796f57-a38e-4662-b0db-f8717051d902] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 13:39:53.620524   15646 system_pods.go:89] "etcd-addons-313496" [7f91653e-02a7-4c1b-9e71-445271163d23] Running
	I1014 13:39:53.620532   15646 system_pods.go:89] "kube-apiserver-addons-313496" [4d56adc4-d1cd-4c02-9cc3-92236aaeb40a] Running
	I1014 13:39:53.620538   15646 system_pods.go:89] "kube-controller-manager-addons-313496" [584d1c59-ade1-4c41-96fe-8d7b394b06f3] Running
	I1014 13:39:53.620548   15646 system_pods.go:89] "kube-ingress-dns-minikube" [664164ae-6d4b-47d0-8091-c4a9ae18ae9a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1014 13:39:53.620557   15646 system_pods.go:89] "kube-proxy-7zvnt" [357a51d7-a6c0-4616-aef2-fe9c7074e51d] Running
	I1014 13:39:53.620563   15646 system_pods.go:89] "kube-scheduler-addons-313496" [ec2ff7d8-274f-469f-a656-1f1267296410] Running
	I1014 13:39:53.620570   15646 system_pods.go:89] "metrics-server-84c5f94fbc-cggcl" [33ed4d65-0bcf-4a12-beaf-298d4c5f2714] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 13:39:53.620579   15646 system_pods.go:89] "nvidia-device-plugin-daemonset-kkmfm" [846014ef-c2c5-47a1-b0ae-3e582a248ee6] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 13:39:53.620588   15646 system_pods.go:89] "registry-66c9cd494c-kxfcz" [a4d53217-34bc-44bb-8e30-d6b8914b6825] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 13:39:53.620601   15646 system_pods.go:89] "registry-proxy-xsptb" [ed9b7051-496c-4b26-be7b-c8c2afd04b8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 13:39:53.620610   15646 system_pods.go:89] "snapshot-controller-56fcc65765-ttgh7" [c91f9671-b7dc-43c9-b0f2-347714aec2ba] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.620621   15646 system_pods.go:89] "snapshot-controller-56fcc65765-vvqh6" [ee08ee62-c76c-4fcf-947e-9dd882c3e072] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:39:53.620627   15646 system_pods.go:89] "storage-provisioner" [3ad1bb99-d287-4642-957b-3d383adfa12a] Running
	I1014 13:39:53.620638   15646 system_pods.go:126] duration metric: took 68.960575ms to wait for k8s-apps to be running ...
	I1014 13:39:53.620647   15646 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:39:53.620703   15646 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:39:53.981738   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:53.981838   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:53.983676   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:54.483575   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:54.483775   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:54.484122   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:54.702030   15646 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.125141186s)
	I1014 13:39:54.702083   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:54.702098   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:54.702100   15646 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.08136487s)
	I1014 13:39:54.702126   15646 system_svc.go:56] duration metric: took 1.081477085s WaitForService to wait for kubelet
	I1014 13:39:54.702137   15646 kubeadm.go:582] duration metric: took 12.498427406s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:39:54.702163   15646 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:39:54.702356   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:54.702368   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:54.702370   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:54.702387   15646 main.go:141] libmachine: Making call to close driver server
	I1014 13:39:54.702395   15646 main.go:141] libmachine: (addons-313496) Calling .Close
	I1014 13:39:54.702641   15646 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:39:54.702654   15646 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:39:54.702669   15646 main.go:141] libmachine: (addons-313496) DBG | Closing plugin on server side
	I1014 13:39:54.703591   15646 addons.go:475] Verifying addon gcp-auth=true in "addons-313496"
	I1014 13:39:54.705250   15646 out.go:177] * Verifying gcp-auth addon...
	I1014 13:39:54.707632   15646 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 13:39:54.742119   15646 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:39:54.742152   15646 node_conditions.go:123] node cpu capacity is 2
	I1014 13:39:54.742167   15646 node_conditions.go:105] duration metric: took 39.99816ms to run NodePressure ...
	I1014 13:39:54.742180   15646 start.go:241] waiting for startup goroutines ...
	I1014 13:39:54.742390   15646 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 13:39:54.742404   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:54.978423   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:54.981546   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:54.982442   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:55.211567   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:55.484041   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:55.484470   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:55.484942   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:55.711451   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:55.980783   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:55.982718   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:55.984095   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:56.213871   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:56.480283   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:56.480461   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:56.482467   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:56.711098   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:56.978824   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:56.979961   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:56.981798   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:57.211400   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:57.482131   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:57.482181   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:57.482408   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:57.711282   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:57.979255   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:57.979632   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:57.982799   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:58.211166   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:58.479399   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:58.479726   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:58.481357   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:58.712157   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:58.978630   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:58.979112   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:58.981473   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:59.212550   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:59.479071   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.479307   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:59.481246   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:39:59.712227   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:39:59.978336   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:39:59.978813   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:39:59.983506   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.211109   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:00.479045   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.480714   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:00.481796   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.711396   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:00.978616   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:00.982175   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:00.982948   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.211856   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:01.477964   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.478717   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.481535   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:01.712287   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:01.979651   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:01.980922   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.982427   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.211414   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:02.479511   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.479842   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.481526   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:02.712618   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:02.979311   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:02.979505   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.982045   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.212265   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:03.482340   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.482579   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.483854   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:03.712039   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:03.979278   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.979581   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:03.981210   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.212031   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.479908   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.479929   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.483325   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:04.711575   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:04.979051   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.979333   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.980786   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.212262   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.479465   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.479936   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.481997   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.711968   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:05.981135   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.981991   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.983683   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.210972   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.481888   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.482029   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.482678   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.711289   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:06.978513   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.978765   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.980777   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.211773   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.478715   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.479490   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.484437   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.711453   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:07.979797   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.980014   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.990967   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.211779   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:08.478301   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.478976   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.481446   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.711861   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:08.979533   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.980156   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.981391   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.210791   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.479829   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.480292   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.481876   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.861507   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:09.978049   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.980513   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.980811   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.210799   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.479529   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.480334   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.481457   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.711355   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:10.979513   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.979946   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.982322   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.210957   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.478763   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.480690   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.481093   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.712196   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:11.986955   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.987208   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.987905   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.213203   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.479266   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.480295   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.486884   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.712005   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:12.979164   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.979782   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.981661   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.211244   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.478327   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.478883   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.481238   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.711953   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:13.980334   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.980478   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.981305   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.211881   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.480982   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.484294   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:14.485875   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.711575   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:14.981397   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.984371   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.987733   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.212040   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.485425   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.495512   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.499103   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.712941   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:15.985560   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.988037   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.988156   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.211959   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.481669   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.483883   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.583899   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.710964   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:16.978265   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.980151   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.981729   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.211565   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.958714   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:17.958864   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.959294   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.959882   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.056188   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.056349   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.056846   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.211840   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.486040   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.486286   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.487008   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.711114   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:18.981669   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.981780   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.982612   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.213712   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.478887   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.479470   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.482883   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.711964   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:19.979867   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.980352   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.989632   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.212798   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.479605   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.479871   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.481595   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.714281   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:20.978378   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.979875   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.981567   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.211517   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.481544   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.481866   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.482687   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.711433   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:21.978543   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.979945   15646 kapi.go:107] duration metric: took 31.004942682s to wait for kubernetes.io/minikube-addons=registry ...
	I1014 13:40:21.981932   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.211700   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.478756   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.481032   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.712478   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:22.978868   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.982514   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.210834   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.478903   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.481588   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.711977   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:23.980542   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.981635   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.441967   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.478555   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.481368   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.712090   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:24.979688   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.982005   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.212156   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.480784   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.482396   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.712843   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:25.979042   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.982231   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.212266   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.479803   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.482285   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.724256   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:26.978709   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.980967   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.211848   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.479178   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.482293   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.711702   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:27.979465   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.981314   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.211643   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.485129   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.485192   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.711934   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:28.981378   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.981648   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.211856   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.479554   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.482010   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.711786   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:29.979279   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.981418   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.211000   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.893295   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:30.894769   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.894901   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.984114   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.987110   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.210979   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.478624   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.481528   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.712617   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:31.979380   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.984180   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.213603   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:32.479081   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.481924   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.715103   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:32.985010   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.986128   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.212131   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.478316   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.480866   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.718248   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:33.979361   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.982172   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.212318   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:34.477735   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.480173   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.711862   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.115693   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.117079   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.212099   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.480281   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.481979   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.717482   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:35.982870   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.984774   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.211315   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:36.479180   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.481099   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.714470   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.267041   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.267388   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.267868   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.479766   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.481784   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.711957   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:37.980775   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.987169   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.212310   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.481514   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.483922   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.711601   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:38.984393   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.984713   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.211641   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.479832   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.481896   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.710848   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:39.979411   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.981880   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.210897   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.479300   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.483786   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.711707   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:40.981679   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.986097   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.218305   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.478832   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.482737   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.711341   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:41.978847   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.985825   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.211312   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.479324   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.480587   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.712987   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:42.978530   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.981097   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.211464   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.769016   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.769687   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:43.770090   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.981726   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.981907   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.212083   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.481356   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.481573   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.710948   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:44.980238   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.981869   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.211549   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.479337   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.480925   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.711511   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:45.984069   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.984683   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.212584   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.480033   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.481453   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.712931   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:46.978934   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.981133   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.213023   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.480170   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.482057   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.711053   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:47.982155   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.982294   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.211734   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.478809   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.481093   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.711652   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:48.980730   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.982901   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.211337   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.478476   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.480936   15646 kapi.go:107] duration metric: took 56.004438949s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1014 13:40:49.712012   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:49.979658   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.211831   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.479051   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.711987   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:50.978857   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.211699   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.480549   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.711830   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:51.979513   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.211432   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.478805   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.711486   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:52.978571   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.211626   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.479364   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.712268   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:53.978935   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.211762   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.479785   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:54.711538   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:54.978635   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.211320   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.478282   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:55.710475   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:55.979124   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.212135   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.761865   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:56.762715   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:56.979664   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.212590   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.478572   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:57.711184   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:57.978833   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.211916   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.479601   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:58.711242   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:58.978449   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.210610   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.478965   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:59.711442   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:40:59.978389   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.211864   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.478906   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:00.711503   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:00.978477   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.211247   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.478685   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:01.713698   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:01.979942   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.211284   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.478528   15646 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:41:02.712225   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:02.978513   15646 kapi.go:107] duration metric: took 1m12.004438023s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1014 13:41:03.210854   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.097307   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.211572   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:04.714767   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.211623   15646 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:05.712077   15646 kapi.go:107] duration metric: took 1m11.004439313s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 13:41:05.714109   15646 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-313496 cluster.
	I1014 13:41:05.715590   15646 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 13:41:05.716887   15646 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1014 13:41:05.718242   15646 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, storage-provisioner, amd-gpu-device-plugin, ingress-dns, nvidia-device-plugin, metrics-server, yakd, inspektor-gadget, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1014 13:41:05.719353   15646 addons.go:510] duration metric: took 1m23.515638899s for enable addons: enabled=[cloud-spanner default-storageclass storage-provisioner amd-gpu-device-plugin ingress-dns nvidia-device-plugin metrics-server yakd inspektor-gadget storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1014 13:41:05.719396   15646 start.go:246] waiting for cluster config update ...
	I1014 13:41:05.719414   15646 start.go:255] writing updated cluster config ...
	I1014 13:41:05.719653   15646 ssh_runner.go:195] Run: rm -f paused
	I1014 13:41:05.769889   15646 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:41:05.771798   15646 out.go:177] * Done! kubectl is now configured to use "addons-313496" cluster and "default" namespace by default
	
	
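	As a side note on the gcp-auth messages in the minikube output above: the log says that pods can opt out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of such a pod manifest is shown below. The label key is taken from the log; the label value "true", the pod name, container name, and image are assumptions added purely for illustration and do not come from this report.
	
	    # Hypothetical manifest: opt a pod out of gcp-auth credential mounting.
	    # The label key gcp-auth-skip-secret comes from the minikube message above;
	    # the value "true" and all names/images here are illustrative assumptions.
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds            # hypothetical pod name
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: app                   # hypothetical container name
	        image: busybox              # placeholder image for the sketch
	        command: ["sleep", "3600"]
	
	For pods created before the addon was enabled, the last message above suggests either recreating them or rerunning the addon enable step with --refresh (presumably an invocation along the lines of `minikube addons enable gcp-auth --refresh`; the exact command form is an assumption, only the --refresh flag is named in the log).
	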
	==> CRI-O <==
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.438436080Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913612438409979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0dcb2f30-0897-4bd1-8402-97b1130088e5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.439070607Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d28a85b3-bf0a-42dc-8b29-f452a006eb58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.439134816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d28a85b3-bf0a-42dc-8b29-f452a006eb58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.439393332Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102,PodSandboxId:bcff4297578ca629c3138e6a6fa44e866d894873d43718ace9c1fcdfc687150c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728913462194230126,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qln9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbceeaae-a919-4e5e-add2-814748d5c2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f983b17f3b58c0ccc807665bb150dd0fd83205078419705f8ab89af46ea509d5,PodSandboxId:fbb7032f79d11573d88343478e76ae448632f1e596bb42e495fe3f460754d12f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728913321384197713,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32314337-afcd-4dcd-9ee8-4d9c09bdfb5a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8e2b68a252630090aa1fe23ae5249075ff564520368809c8866fbb63136c55,PodSandboxId:eb4c935e951be22e2fed7e5c06aee766bb1e990f3681f94d7c9824a5423eca7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728913268926283306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0455ab0-9aab-459a-9
53a-f53376cb4884,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcdc01415b8f4b781bde0511dc84f01035dbe6cb5ac43c008ff0b20bcd9ce15a,PodSandboxId:f3521b0c24d462e590e152b4c574a2b8075a65a81f3c9569ddebdb25f3b8c6c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728913218088192076,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cggcl,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 33ed4d65-0bcf-4a12-beaf-298d4c5f2714,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459e3a06aa53790bbc9829c134214c4c8b134441436eb0d3bcb6db79ec3ba3a2,PodSandboxId:f5cc832ca4671754bbc6adfad2f5d28674cd5c1cc9a028e4e8f4cc5e2e561110,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728913193031013172,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m9mtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc02ee9-2529-4893-abc3-e638a461db45,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4291d6c524d1dde5edcb65a98eed8b24ee7acf960c9a24d17f36e05ce41e4,PodSandboxId:492b76f2295273c54c89048f39730b28266aead8467f44dab313a54f217366d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728913190194009961,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ad1bb99-d287-4642-957b-3d383adfa12a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616361d8d4378804e957eb4b6028aa2b7a1f4a55fa64d33c24b4320f1c5a8039,PodSandboxId:498ead475996e476f08f721e935cb0c99763dd1dc09d7b528ef2849c4c6b0ce3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728913186178198589,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69r77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c55ebf0-8189-43c8-b05c-375564deee96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d61ff151a442fc51b211a8fee95c81ae65ea90e27704e2a58afae2cf5b6d965,PodSandboxId:462d804a2d40db1adb8f50108873709a35906f70d9186186ece04276e613ccdf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728913183219147008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7zvnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a51d7-a6c0-4616-aef2-fe9c7074e51d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000c7b368fb0c82a3afd37bffaa28fb1bcb88ca467dacf69ea3fcbe6feb37a89,PodSandboxId:e2a9cfb1ac818cc702114d27ffb5475eb8f70a2cf047bed2c34472defa52d053,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728913171710847173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c022b43dc66cedfe18ccd6d32d8af007,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340c28a59e7bb3430fe29720dfde756e460c4bcea8862296fe9665759230f850,PodSandboxId:da5dcdd6ab62e7b6bc746d0dc4ea3b274e74adcc4f1fc5d34bbea87b80eee43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1d
ecfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728913171690907900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80582d956904040c35d86bede218e47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2a8e9b921aae625ee640bf6c996300e93556e2e3515bcb8c001b5575f0e96e,PodSandboxId:87f5cf26c9d0f21b4718ba5f1e3f94da0b996e38f8903b61b925973a1298f862,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728913171690135701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2c09aa55b52be8f1723b976f9c32a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04882c90388135e7a0ca7695b407a07f1bd0c7b335ab40d90edc9c65f61e824d,PodSandboxId:8ae6efa4db7d1ee3e854b21e380613aed26bf0aefd933aa537bfe7bde7566d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728913171668258337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36eed9f38a68b4bc38d97c43ebca6b86,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d28a85b3-bf0a-42dc-8b29-f452a006eb58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.481066852Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=995d3fcf-268d-485f-be80-1e7c1e6d89b2 name=/runtime.v1.RuntimeService/Version
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.481150297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=995d3fcf-268d-485f-be80-1e7c1e6d89b2 name=/runtime.v1.RuntimeService/Version
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.483060009Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=88b44613-2222-4b1f-9b2e-4c569a779808 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.484386579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913612484357544,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=88b44613-2222-4b1f-9b2e-4c569a779808 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.484949934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16d6a3bd-0ff7-4778-b81f-709ecb7161f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.485027281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16d6a3bd-0ff7-4778-b81f-709ecb7161f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.485324968Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102,PodSandboxId:bcff4297578ca629c3138e6a6fa44e866d894873d43718ace9c1fcdfc687150c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728913462194230126,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qln9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbceeaae-a919-4e5e-add2-814748d5c2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f983b17f3b58c0ccc807665bb150dd0fd83205078419705f8ab89af46ea509d5,PodSandboxId:fbb7032f79d11573d88343478e76ae448632f1e596bb42e495fe3f460754d12f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728913321384197713,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32314337-afcd-4dcd-9ee8-4d9c09bdfb5a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8e2b68a252630090aa1fe23ae5249075ff564520368809c8866fbb63136c55,PodSandboxId:eb4c935e951be22e2fed7e5c06aee766bb1e990f3681f94d7c9824a5423eca7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728913268926283306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0455ab0-9aab-459a-9
53a-f53376cb4884,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcdc01415b8f4b781bde0511dc84f01035dbe6cb5ac43c008ff0b20bcd9ce15a,PodSandboxId:f3521b0c24d462e590e152b4c574a2b8075a65a81f3c9569ddebdb25f3b8c6c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728913218088192076,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cggcl,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 33ed4d65-0bcf-4a12-beaf-298d4c5f2714,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459e3a06aa53790bbc9829c134214c4c8b134441436eb0d3bcb6db79ec3ba3a2,PodSandboxId:f5cc832ca4671754bbc6adfad2f5d28674cd5c1cc9a028e4e8f4cc5e2e561110,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728913193031013172,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m9mtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc02ee9-2529-4893-abc3-e638a461db45,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4291d6c524d1dde5edcb65a98eed8b24ee7acf960c9a24d17f36e05ce41e4,PodSandboxId:492b76f2295273c54c89048f39730b28266aead8467f44dab313a54f217366d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728913190194009961,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ad1bb99-d287-4642-957b-3d383adfa12a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616361d8d4378804e957eb4b6028aa2b7a1f4a55fa64d33c24b4320f1c5a8039,PodSandboxId:498ead475996e476f08f721e935cb0c99763dd1dc09d7b528ef2849c4c6b0ce3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728913186178198589,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69r77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c55ebf0-8189-43c8-b05c-375564deee96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d61ff151a442fc51b211a8fee95c81ae65ea90e27704e2a58afae2cf5b6d965,PodSandboxId:462d804a2d40db1adb8f50108873709a35906f70d9186186ece04276e613ccdf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728913183219147008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7zvnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a51d7-a6c0-4616-aef2-fe9c7074e51d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000c7b368fb0c82a3afd37bffaa28fb1bcb88ca467dacf69ea3fcbe6feb37a89,PodSandboxId:e2a9cfb1ac818cc702114d27ffb5475eb8f70a2cf047bed2c34472defa52d053,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728913171710847173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c022b43dc66cedfe18ccd6d32d8af007,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340c28a59e7bb3430fe29720dfde756e460c4bcea8862296fe9665759230f850,PodSandboxId:da5dcdd6ab62e7b6bc746d0dc4ea3b274e74adcc4f1fc5d34bbea87b80eee43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1d
ecfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728913171690907900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80582d956904040c35d86bede218e47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2a8e9b921aae625ee640bf6c996300e93556e2e3515bcb8c001b5575f0e96e,PodSandboxId:87f5cf26c9d0f21b4718ba5f1e3f94da0b996e38f8903b61b925973a1298f862,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728913171690135701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2c09aa55b52be8f1723b976f9c32a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04882c90388135e7a0ca7695b407a07f1bd0c7b335ab40d90edc9c65f61e824d,PodSandboxId:8ae6efa4db7d1ee3e854b21e380613aed26bf0aefd933aa537bfe7bde7566d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728913171668258337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36eed9f38a68b4bc38d97c43ebca6b86,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=16d6a3bd-0ff7-4778-b81f-709ecb7161f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.523076341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=84a2b48c-3973-4ca2-8134-115dff6f99c9 name=/runtime.v1.RuntimeService/Version
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.523172167Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=84a2b48c-3973-4ca2-8134-115dff6f99c9 name=/runtime.v1.RuntimeService/Version
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.524220953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c2c36f9b-4097-45c8-ace1-ffe55d5bbf7a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.525744618Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913612525717528,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c2c36f9b-4097-45c8-ace1-ffe55d5bbf7a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.526394349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bacd41f9-6710-403f-a42d-24cdac634f95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.526465403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bacd41f9-6710-403f-a42d-24cdac634f95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.526776072Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102,PodSandboxId:bcff4297578ca629c3138e6a6fa44e866d894873d43718ace9c1fcdfc687150c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728913462194230126,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qln9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbceeaae-a919-4e5e-add2-814748d5c2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f983b17f3b58c0ccc807665bb150dd0fd83205078419705f8ab89af46ea509d5,PodSandboxId:fbb7032f79d11573d88343478e76ae448632f1e596bb42e495fe3f460754d12f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728913321384197713,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32314337-afcd-4dcd-9ee8-4d9c09bdfb5a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8e2b68a252630090aa1fe23ae5249075ff564520368809c8866fbb63136c55,PodSandboxId:eb4c935e951be22e2fed7e5c06aee766bb1e990f3681f94d7c9824a5423eca7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728913268926283306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0455ab0-9aab-459a-9
53a-f53376cb4884,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcdc01415b8f4b781bde0511dc84f01035dbe6cb5ac43c008ff0b20bcd9ce15a,PodSandboxId:f3521b0c24d462e590e152b4c574a2b8075a65a81f3c9569ddebdb25f3b8c6c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728913218088192076,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cggcl,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 33ed4d65-0bcf-4a12-beaf-298d4c5f2714,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459e3a06aa53790bbc9829c134214c4c8b134441436eb0d3bcb6db79ec3ba3a2,PodSandboxId:f5cc832ca4671754bbc6adfad2f5d28674cd5c1cc9a028e4e8f4cc5e2e561110,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728913193031013172,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m9mtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc02ee9-2529-4893-abc3-e638a461db45,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4291d6c524d1dde5edcb65a98eed8b24ee7acf960c9a24d17f36e05ce41e4,PodSandboxId:492b76f2295273c54c89048f39730b28266aead8467f44dab313a54f217366d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728913190194009961,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ad1bb99-d287-4642-957b-3d383adfa12a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616361d8d4378804e957eb4b6028aa2b7a1f4a55fa64d33c24b4320f1c5a8039,PodSandboxId:498ead475996e476f08f721e935cb0c99763dd1dc09d7b528ef2849c4c6b0ce3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728913186178198589,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69r77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c55ebf0-8189-43c8-b05c-375564deee96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d61ff151a442fc51b211a8fee95c81ae65ea90e27704e2a58afae2cf5b6d965,PodSandboxId:462d804a2d40db1adb8f50108873709a35906f70d9186186ece04276e613ccdf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728913183219147008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7zvnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a51d7-a6c0-4616-aef2-fe9c7074e51d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000c7b368fb0c82a3afd37bffaa28fb1bcb88ca467dacf69ea3fcbe6feb37a89,PodSandboxId:e2a9cfb1ac818cc702114d27ffb5475eb8f70a2cf047bed2c34472defa52d053,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728913171710847173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c022b43dc66cedfe18ccd6d32d8af007,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340c28a59e7bb3430fe29720dfde756e460c4bcea8862296fe9665759230f850,PodSandboxId:da5dcdd6ab62e7b6bc746d0dc4ea3b274e74adcc4f1fc5d34bbea87b80eee43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1d
ecfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728913171690907900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80582d956904040c35d86bede218e47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2a8e9b921aae625ee640bf6c996300e93556e2e3515bcb8c001b5575f0e96e,PodSandboxId:87f5cf26c9d0f21b4718ba5f1e3f94da0b996e38f8903b61b925973a1298f862,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728913171690135701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2c09aa55b52be8f1723b976f9c32a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04882c90388135e7a0ca7695b407a07f1bd0c7b335ab40d90edc9c65f61e824d,PodSandboxId:8ae6efa4db7d1ee3e854b21e380613aed26bf0aefd933aa537bfe7bde7566d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728913171668258337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36eed9f38a68b4bc38d97c43ebca6b86,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bacd41f9-6710-403f-a42d-24cdac634f95 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.561171003Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e22ff078-513b-442b-a5cb-c483738219a7 name=/runtime.v1.RuntimeService/Version
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.561260139Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e22ff078-513b-442b-a5cb-c483738219a7 name=/runtime.v1.RuntimeService/Version
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.562489689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e8386fad-68d9-4338-9037-2eb25b49c482 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.564010075Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913612563984076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e8386fad-68d9-4338-9037-2eb25b49c482 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.564666903Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ca770f6-9b26-49a2-817d-0eba4fe53a58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.564740159Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ca770f6-9b26-49a2-817d-0eba4fe53a58 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 13:46:52 addons-313496 crio[664]: time="2024-10-14 13:46:52.564990655Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f4275239fa5deb9d923557818a80b5ac4db65d511c9aac19dced2c430255a102,PodSandboxId:bcff4297578ca629c3138e6a6fa44e866d894873d43718ace9c1fcdfc687150c,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1728913462194230126,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-qln9q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: bbceeaae-a919-4e5e-add2-814748d5c2b5,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f983b17f3b58c0ccc807665bb150dd0fd83205078419705f8ab89af46ea509d5,PodSandboxId:fbb7032f79d11573d88343478e76ae448632f1e596bb42e495fe3f460754d12f,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045,State:CONTAINER_RUNNING,CreatedAt:1728913321384197713,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 32314337-afcd-4dcd-9ee8-4d9c09bdfb5a,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac8e2b68a252630090aa1fe23ae5249075ff564520368809c8866fbb63136c55,PodSandboxId:eb4c935e951be22e2fed7e5c06aee766bb1e990f3681f94d7c9824a5423eca7f,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728913268926283306,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: d0455ab0-9aab-459a-9
53a-f53376cb4884,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fcdc01415b8f4b781bde0511dc84f01035dbe6cb5ac43c008ff0b20bcd9ce15a,PodSandboxId:f3521b0c24d462e590e152b4c574a2b8075a65a81f3c9569ddebdb25f3b8c6c4,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1728913218088192076,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-cggcl,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 33ed4d65-0bcf-4a12-beaf-298d4c5f2714,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:459e3a06aa53790bbc9829c134214c4c8b134441436eb0d3bcb6db79ec3ba3a2,PodSandboxId:f5cc832ca4671754bbc6adfad2f5d28674cd5c1cc9a028e4e8f4cc5e2e561110,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1728913193031013172,Labels:map[string]string{io.kube
rnetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-m9mtz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fc02ee9-2529-4893-abc3-e638a461db45,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65a4291d6c524d1dde5edcb65a98eed8b24ee7acf960c9a24d17f36e05ce41e4,PodSandboxId:492b76f2295273c54c89048f39730b28266aead8467f44dab313a54f217366d5,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728913190194009961,Labels:map[string]string{io.kubern
etes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ad1bb99-d287-4642-957b-3d383adfa12a,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:616361d8d4378804e957eb4b6028aa2b7a1f4a55fa64d33c24b4320f1c5a8039,PodSandboxId:498ead475996e476f08f721e935cb0c99763dd1dc09d7b528ef2849c4c6b0ce3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728913186178198589,Labels:map[string]string{io.kubernetes.container.name: cor
edns,io.kubernetes.pod.name: coredns-7c65d6cfc9-69r77,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c55ebf0-8189-43c8-b05c-375564deee96,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d61ff151a442fc51b211a8fee95c81ae65ea90e27704e2a58afae2cf5b6d965,PodSandboxId:462d804a2d40db1adb8f50108873709a35906f70d9186186ece04276e613ccdf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,Runtim
eHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728913183219147008,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-7zvnt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 357a51d7-a6c0-4616-aef2-fe9c7074e51d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:000c7b368fb0c82a3afd37bffaa28fb1bcb88ca467dacf69ea3fcbe6feb37a89,PodSandboxId:e2a9cfb1ac818cc702114d27ffb5475eb8f70a2cf047bed2c34472defa52d053,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad9
41575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728913171710847173,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c022b43dc66cedfe18ccd6d32d8af007,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:340c28a59e7bb3430fe29720dfde756e460c4bcea8862296fe9665759230f850,PodSandboxId:da5dcdd6ab62e7b6bc746d0dc4ea3b274e74adcc4f1fc5d34bbea87b80eee43d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1d
ecfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728913171690907900,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c80582d956904040c35d86bede218e47,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd2a8e9b921aae625ee640bf6c996300e93556e2e3515bcb8c001b5575f0e96e,PodSandboxId:87f5cf26c9d0f21b4718ba5f1e3f94da0b996e38f8903b61b925973a1298f862,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da7
92cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728913171690135701,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c2c09aa55b52be8f1723b976f9c32a9,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04882c90388135e7a0ca7695b407a07f1bd0c7b335ab40d90edc9c65f61e824d,PodSandboxId:8ae6efa4db7d1ee3e854b21e380613aed26bf0aefd933aa537bfe7bde7566d93,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER
_RUNNING,CreatedAt:1728913171668258337,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-313496,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36eed9f38a68b4bc38d97c43ebca6b86,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ca770f6-9b26-49a2-817d-0eba4fe53a58 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f4275239fa5de       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   bcff4297578ca       hello-world-app-55bf9c44b4-qln9q
	f983b17f3b58c       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago       Running             nginx                     0                   fbb7032f79d11       nginx
	ac8e2b68a2526       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   eb4c935e951be       busybox
	fcdc01415b8f4       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   f3521b0c24d46       metrics-server-84c5f94fbc-cggcl
	459e3a06aa537       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                6 minutes ago       Running             amd-gpu-device-plugin     0                   f5cc832ca4671       amd-gpu-device-plugin-m9mtz
	65a4291d6c524       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   492b76f229527       storage-provisioner
	616361d8d4378       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   498ead475996e       coredns-7c65d6cfc9-69r77
	9d61ff151a442       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        7 minutes ago       Running             kube-proxy                0                   462d804a2d40d       kube-proxy-7zvnt
	000c7b368fb0c       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        7 minutes ago       Running             kube-scheduler            0                   e2a9cfb1ac818       kube-scheduler-addons-313496
	340c28a59e7bb       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        7 minutes ago       Running             kube-apiserver            0                   da5dcdd6ab62e       kube-apiserver-addons-313496
	fd2a8e9b921aa       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   87f5cf26c9d0f       etcd-addons-313496
	04882c9038813       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        7 minutes ago       Running             kube-controller-manager   0                   8ae6efa4db7d1       kube-controller-manager-addons-313496
	
	
	==> coredns [616361d8d4378804e957eb4b6028aa2b7a1f4a55fa64d33c24b4320f1c5a8039] <==
	[INFO] 10.244.0.22:56193 - 44781 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007572s
	[INFO] 10.244.0.22:56193 - 22577 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000059008s
	[INFO] 10.244.0.22:56193 - 55391 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069549s
	[INFO] 10.244.0.22:56193 - 24259 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000962s
	[INFO] 10.244.0.22:56969 - 29879 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000085214s
	[INFO] 10.244.0.22:56969 - 45466 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000119358s
	[INFO] 10.244.0.22:56969 - 24847 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000057941s
	[INFO] 10.244.0.22:56969 - 30943 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000052283s
	[INFO] 10.244.0.22:56969 - 28633 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000055916s
	[INFO] 10.244.0.22:56969 - 47747 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006689s
	[INFO] 10.244.0.22:56969 - 19952 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000076788s
	[INFO] 10.244.0.22:56497 - 43253 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000129541s
	[INFO] 10.244.0.22:55702 - 36946 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053289s
	[INFO] 10.244.0.22:56497 - 6703 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000053944s
	[INFO] 10.244.0.22:56497 - 59022 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000090981s
	[INFO] 10.244.0.22:56497 - 1810 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000076559s
	[INFO] 10.244.0.22:55702 - 61139 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000089984s
	[INFO] 10.244.0.22:56497 - 26504 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000070979s
	[INFO] 10.244.0.22:56497 - 41814 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000072931s
	[INFO] 10.244.0.22:55702 - 59514 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000141225s
	[INFO] 10.244.0.22:55702 - 6574 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000063587s
	[INFO] 10.244.0.22:56497 - 50603 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000100521s
	[INFO] 10.244.0.22:55702 - 61081 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000048146s
	[INFO] 10.244.0.22:55702 - 22276 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000032502s
	[INFO] 10.244.0.22:55702 - 57323 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000040911s
	
	
	==> describe nodes <==
	Name:               addons-313496
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-313496
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=addons-313496
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_39_37_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-313496
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:39:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-313496
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:46:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:44:43 +0000   Mon, 14 Oct 2024 13:39:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:44:43 +0000   Mon, 14 Oct 2024 13:39:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:44:43 +0000   Mon, 14 Oct 2024 13:39:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:44:43 +0000   Mon, 14 Oct 2024 13:39:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.177
	  Hostname:    addons-313496
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 293dfa51674b4a789ea1e2204c6437a9
	  System UUID:                293dfa51-674b-4a78-9ea1-e2204c6437a9
	  Boot ID:                    75724930-219f-4ba1-a96c-8f16884c2e8f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m46s
	  default                     hello-world-app-55bf9c44b4-qln9q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m32s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 amd-gpu-device-plugin-m9mtz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 coredns-7c65d6cfc9-69r77                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m10s
	  kube-system                 etcd-addons-313496                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m15s
	  kube-system                 kube-apiserver-addons-313496             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-controller-manager-addons-313496    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 kube-proxy-7zvnt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-scheduler-addons-313496             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m15s
	  kube-system                 metrics-server-84c5f94fbc-cggcl          100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m4s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m8s   kube-proxy       
	  Normal  Starting                 7m15s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m15s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m15s  kubelet          Node addons-313496 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s  kubelet          Node addons-313496 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s  kubelet          Node addons-313496 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m14s  kubelet          Node addons-313496 status is now: NodeReady
	  Normal  RegisteredNode           7m11s  node-controller  Node addons-313496 event: Registered Node addons-313496 in Controller
	
	
	==> dmesg <==
	[  +0.091241] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.326380] systemd-fstab-generator[1341]: Ignoring "noauto" option for root device
	[  +0.176653] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.075864] kauditd_printk_skb: 115 callbacks suppressed
	[  +5.000896] kauditd_printk_skb: 133 callbacks suppressed
	[  +6.173920] kauditd_printk_skb: 89 callbacks suppressed
	[Oct14 13:40] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.016490] kauditd_printk_skb: 2 callbacks suppressed
	[ +10.499188] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.430837] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.011973] kauditd_printk_skb: 28 callbacks suppressed
	[ +12.158324] kauditd_printk_skb: 3 callbacks suppressed
	[Oct14 13:41] kauditd_printk_skb: 16 callbacks suppressed
	[  +9.491800] kauditd_printk_skb: 9 callbacks suppressed
	[ +13.776408] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.404069] kauditd_printk_skb: 8 callbacks suppressed
	[  +5.670842] kauditd_printk_skb: 40 callbacks suppressed
	[  +6.004383] kauditd_printk_skb: 63 callbacks suppressed
	[  +6.962880] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.545232] kauditd_printk_skb: 3 callbacks suppressed
	[Oct14 13:42] kauditd_printk_skb: 25 callbacks suppressed
	[ +16.195774] kauditd_printk_skb: 6 callbacks suppressed
	[  +6.875442] kauditd_printk_skb: 7 callbacks suppressed
	[Oct14 13:44] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.225891] kauditd_printk_skb: 19 callbacks suppressed
	
	
	==> etcd [fd2a8e9b921aae625ee640bf6c996300e93556e2e3515bcb8c001b5575f0e96e] <==
	{"level":"info","ts":"2024-10-14T13:41:04.081912Z","caller":"traceutil/trace.go:171","msg":"trace[212189526] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1110; }","duration":"314.960895ms","start":"2024-10-14T13:41:03.766945Z","end":"2024-10-14T13:41:04.081906Z","steps":["trace[212189526] 'range keys from in-memory index tree'  (duration: 313.644148ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:41:04.080828Z","caller":"traceutil/trace.go:171","msg":"trace[1059839605] linearizableReadLoop","detail":"{readStateIndex:1142; appliedIndex:1141; }","duration":"334.408737ms","start":"2024-10-14T13:41:03.746408Z","end":"2024-10-14T13:41:04.080817Z","steps":["trace[1059839605] 'read index received'  (duration: 276.946998ms)","trace[1059839605] 'applied index is now lower than readState.Index'  (duration: 57.461315ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T13:41:04.080928Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.516743ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:04.081985Z","caller":"traceutil/trace.go:171","msg":"trace[63410484] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1111; }","duration":"335.577994ms","start":"2024-10-14T13:41:03.746402Z","end":"2024-10-14T13:41:04.081980Z","steps":["trace[63410484] 'agreement among raft nodes before linearized reading'  (duration: 334.471671ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.082041Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.746367Z","time spent":"335.662492ms","remote":"127.0.0.1:34300","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"warn","ts":"2024-10-14T13:41:04.082170Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"310.999478ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:04.082205Z","caller":"traceutil/trace.go:171","msg":"trace[1149194388] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1112; }","duration":"311.035075ms","start":"2024-10-14T13:41:03.771165Z","end":"2024-10-14T13:41:04.082200Z","steps":["trace[1149194388] 'agreement among raft nodes before linearized reading'  (duration: 310.987624ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.082221Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.771133Z","time spent":"311.084458ms","remote":"127.0.0.1:34122","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-10-14T13:41:04.081096Z","caller":"traceutil/trace.go:171","msg":"trace[1341300446] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"334.804484ms","start":"2024-10-14T13:41:03.746226Z","end":"2024-10-14T13:41:04.081030Z","steps":["trace[1341300446] 'process raft request'  (duration: 277.12123ms)","trace[1341300446] 'compare'  (duration: 56.924346ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T13:41:04.082712Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.571604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-10-14T13:41:04.082759Z","caller":"traceutil/trace.go:171","msg":"trace[1495775215] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1112; }","duration":"202.620784ms","start":"2024-10-14T13:41:03.880131Z","end":"2024-10-14T13:41:04.082752Z","steps":["trace[1495775215] 'agreement among raft nodes before linearized reading'  (duration: 202.342253ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.083026Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.261453ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:04.083066Z","caller":"traceutil/trace.go:171","msg":"trace[688169331] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1112; }","duration":"213.302181ms","start":"2024-10-14T13:41:03.869758Z","end":"2024-10-14T13:41:04.083060Z","steps":["trace[688169331] 'agreement among raft nodes before linearized reading'  (duration: 213.252729ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.083579Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.746210Z","time spent":"336.106762ms","remote":"127.0.0.1:34394","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":486,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1110 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:427 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-10-14T13:41:04.081151Z","caller":"traceutil/trace.go:171","msg":"trace[738532267] transaction","detail":"{read_only:false; response_revision:1112; number_of_response:1; }","duration":"318.155325ms","start":"2024-10-14T13:41:03.762987Z","end":"2024-10-14T13:41:04.081142Z","steps":["trace[738532267] 'process raft request'  (duration: 317.796221ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:04.083932Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:03.762971Z","time spent":"320.855684ms","remote":"127.0.0.1:34196","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":782,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-xxf5h.17fe557405bf9b0d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/ingress-nginx/ingress-nginx-controller-5f85ff4588-xxf5h.17fe557405bf9b0d\" value_size:675 lease:8080181808251469247 >> failure:<>"}
	{"level":"info","ts":"2024-10-14T13:41:35.967283Z","caller":"traceutil/trace.go:171","msg":"trace[539375714] linearizableReadLoop","detail":"{readStateIndex:1328; appliedIndex:1327; }","duration":"199.901265ms","start":"2024-10-14T13:41:35.767369Z","end":"2024-10-14T13:41:35.967270Z","steps":["trace[539375714] 'read index received'  (duration: 199.745848ms)","trace[539375714] 'applied index is now lower than readState.Index'  (duration: 155.013µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T13:41:35.967494Z","caller":"traceutil/trace.go:171","msg":"trace[1271355131] transaction","detail":"{read_only:false; response_revision:1291; number_of_response:1; }","duration":"369.274456ms","start":"2024-10-14T13:41:35.598211Z","end":"2024-10-14T13:41:35.967485Z","steps":["trace[1271355131] 'process raft request'  (duration: 368.944168ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T13:41:35.967572Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T13:41:35.598197Z","time spent":"369.325745ms","remote":"127.0.0.1:34282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1287 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T13:41:35.967752Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.380393ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-14T13:41:35.968813Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.222397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T13:41:35.968847Z","caller":"traceutil/trace.go:171","msg":"trace[1352015729] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1291; }","duration":"101.26558ms","start":"2024-10-14T13:41:35.867573Z","end":"2024-10-14T13:41:35.968839Z","steps":["trace[1352015729] 'agreement among raft nodes before linearized reading'  (duration: 101.164144ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:41:35.969105Z","caller":"traceutil/trace.go:171","msg":"trace[38771890] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1291; }","duration":"201.733561ms","start":"2024-10-14T13:41:35.767363Z","end":"2024-10-14T13:41:35.969096Z","steps":["trace[38771890] 'agreement among raft nodes before linearized reading'  (duration: 200.367001ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:42:06.637050Z","caller":"traceutil/trace.go:171","msg":"trace[237475859] transaction","detail":"{read_only:false; response_revision:1581; number_of_response:1; }","duration":"166.724719ms","start":"2024-10-14T13:42:06.470301Z","end":"2024-10-14T13:42:06.637026Z","steps":["trace[237475859] 'process raft request'  (duration: 166.587294ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T13:42:44.580165Z","caller":"traceutil/trace.go:171","msg":"trace[1709847578] transaction","detail":"{read_only:false; response_revision:1796; number_of_response:1; }","duration":"194.608499ms","start":"2024-10-14T13:42:44.385528Z","end":"2024-10-14T13:42:44.580137Z","steps":["trace[1709847578] 'process raft request'  (duration: 194.43891ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:46:52 up 7 min,  0 users,  load average: 0.07, 0.73, 0.54
	Linux addons-313496 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [340c28a59e7bb3430fe29720dfde756e460c4bcea8862296fe9665759230f850] <==
	E1014 13:41:28.823164       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.220.222:443: connect: connection refused" logger="UnhandledError"
	E1014 13:41:28.829081       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.220.222:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.220.222:443: connect: connection refused" logger="UnhandledError"
	I1014 13:41:28.893723       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1014 13:41:31.311995       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.23.145"}
	I1014 13:41:54.674308       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1014 13:41:55.799257       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1014 13:42:00.212091       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1014 13:42:00.409965       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.104.226.119"}
	E1014 13:42:01.940023       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1014 13:42:14.892582       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1014 13:42:33.792996       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.793081       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.867101       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.867198       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.915907       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.916018       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.946196       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.946286       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1014 13:42:33.971503       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1014 13:42:33.971552       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1014 13:42:34.948418       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1014 13:42:34.972457       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1014 13:42:35.093897       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1014 13:44:20.903281       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.101.219.121"}
	E1014 13:44:24.552380       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	
	
	==> kube-controller-manager [04882c90388135e7a0ca7695b407a07f1bd0c7b335ab40d90edc9c65f61e824d] <==
	E1014 13:44:32.733757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1014 13:44:35.245030       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	I1014 13:44:43.456146       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-313496"
	W1014 13:44:44.574890       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:44:44.574955       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:44:56.648171       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:44:56.648336       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:15.455048       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:15.455109       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:18.714702       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:18.714807       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:29.388799       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:29.388851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:36.317417       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:36.317537       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:45:58.251679       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:45:58.251979       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:46:16.191195       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:16.191241       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:46:26.433294       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:26.433527       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:46:28.747259       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:28.747327       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1014 13:46:38.121487       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1014 13:46:38.121557       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [9d61ff151a442fc51b211a8fee95c81ae65ea90e27704e2a58afae2cf5b6d965] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:39:44.152595       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:39:44.164296       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.177"]
	E1014 13:39:44.164383       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:39:44.273083       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:39:44.273127       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:39:44.273150       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:39:44.276357       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:39:44.276697       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:39:44.276710       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:39:44.278437       1 config.go:199] "Starting service config controller"
	I1014 13:39:44.278449       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:39:44.278464       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:39:44.278468       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:39:44.278934       1 config.go:328] "Starting node config controller"
	I1014 13:39:44.278940       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:39:44.379213       1 shared_informer.go:320] Caches are synced for node config
	I1014 13:39:44.379242       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:39:44.379269       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [000c7b368fb0c82a3afd37bffaa28fb1bcb88ca467dacf69ea3fcbe6feb37a89] <==
	W1014 13:39:34.683172       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 13:39:34.683602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:34.693268       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:39:34.693403       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:39:35.576296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:35.576387       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.583227       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 13:39:35.583271       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.605556       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 13:39:35.605594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.655265       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:39:35.655319       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:39:35.715388       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 13:39:35.715443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.798706       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 13:39:35.798757       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.887442       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:35.887492       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.895907       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 13:39:35.896061       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.910043       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 13:39:35.910143       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:35.939843       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:39:35.939959       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:39:38.063992       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 13:45:37 addons-313496 kubelet[1210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 13:45:37 addons-313496 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 13:45:37 addons-313496 kubelet[1210]: E1014 13:45:37.572941    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913537572214110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:37 addons-313496 kubelet[1210]: E1014 13:45:37.573099    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913537572214110,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:47 addons-313496 kubelet[1210]: E1014 13:45:47.575908    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913547575545168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:47 addons-313496 kubelet[1210]: E1014 13:45:47.576201    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913547575545168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:57 addons-313496 kubelet[1210]: E1014 13:45:57.579066    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913557578753202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:45:57 addons-313496 kubelet[1210]: E1014 13:45:57.579119    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913557578753202,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:04 addons-313496 kubelet[1210]: I1014 13:46:04.219688    1210 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:46:07 addons-313496 kubelet[1210]: E1014 13:46:07.581790    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913567581463114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:07 addons-313496 kubelet[1210]: E1014 13:46:07.581842    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913567581463114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:17 addons-313496 kubelet[1210]: E1014 13:46:17.584508    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913577583812735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:17 addons-313496 kubelet[1210]: E1014 13:46:17.584921    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913577583812735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:27 addons-313496 kubelet[1210]: E1014 13:46:27.587994    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913587587237490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:27 addons-313496 kubelet[1210]: E1014 13:46:27.588403    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913587587237490,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:35 addons-313496 kubelet[1210]: I1014 13:46:35.219085    1210 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-m9mtz" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:46:37 addons-313496 kubelet[1210]: E1014 13:46:37.238022    1210 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 13:46:37 addons-313496 kubelet[1210]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 13:46:37 addons-313496 kubelet[1210]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 13:46:37 addons-313496 kubelet[1210]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 13:46:37 addons-313496 kubelet[1210]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 13:46:37 addons-313496 kubelet[1210]: E1014 13:46:37.590873    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913597590404711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:37 addons-313496 kubelet[1210]: E1014 13:46:37.591003    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913597590404711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:47 addons-313496 kubelet[1210]: E1014 13:46:47.593729    1210 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913607593268739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:46:47 addons-313496 kubelet[1210]: E1014 13:46:47.593801    1210 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728913607593268739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:596189,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [65a4291d6c524d1dde5edcb65a98eed8b24ee7acf960c9a24d17f36e05ce41e4] <==
	I1014 13:39:51.094784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 13:39:51.177212       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 13:39:51.177285       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 13:39:51.425162       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 13:39:51.426958       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1cbdc0c7-ddb7-4724-aa78-342ddce41743", APIVersion:"v1", ResourceVersion:"705", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-313496_f7441b78-eea4-419b-b255-37d1b82027a5 became leader
	I1014 13:39:51.427005       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-313496_f7441b78-eea4-419b-b255-37d1b82027a5!
	I1014 13:39:51.533856       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-313496_f7441b78-eea4-419b-b255-37d1b82027a5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-313496 -n addons-313496
helpers_test.go:261: (dbg) Run:  kubectl --context addons-313496 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (306.35s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.21s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-313496
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-313496: exit status 82 (2m0.459654325s)

                                                
                                                
-- stdout --
	* Stopping node "addons-313496"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-313496" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-313496
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-313496: exit status 11 (21.466250221s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-313496" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-313496
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-313496: exit status 11 (6.143020703s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-313496" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-313496
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-313496: exit status 11 (6.144031288s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.177:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-313496" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (3.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image rm kicbase/echo-server:functional-917108 --alsologtostderr
functional_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 image rm kicbase/echo-server:functional-917108 --alsologtostderr: (2.761290321s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls
functional_test.go:403: expected "kicbase/echo-server:functional-917108" to be removed from minikube but still exists
--- FAIL: TestFunctional/parallel/ImageCommands/ImageRemove (3.07s)
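The assertion at functional_test.go:403 re-runs `image ls` after `image rm` and fails because the tag is still listed. A minimal sketch of that kind of post-removal check, assuming it simply scans the `image ls` output for the tag (the real test helper may work differently), could look like this in Go:

	// Sketch only: confirm a tag no longer appears in "minikube image ls" output.
	// The binary path, profile and tag come from the log above; checkImageRemoved
	// is a hypothetical helper, not the actual functional_test.go code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func checkImageRemoved(minikube, profile, tag string) error {
		out, err := exec.Command(minikube, "-p", profile, "image", "ls").CombinedOutput()
		if err != nil {
			return fmt.Errorf("image ls failed: %v\n%s", err, out)
		}
		if strings.Contains(string(out), tag) {
			return fmt.Errorf("expected %q to be removed from minikube but still exists", tag)
		}
		return nil
	}

	func main() {
		err := checkImageRemoved("out/minikube-linux-amd64", "functional-917108",
			"kicbase/echo-server:functional-917108")
		if err != nil {
			fmt.Println(err)
		}
	}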

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.520918572s)
functional_test.go:411: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1014 13:53:59.493767   24080 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:53:59.493943   24080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:53:59.493990   24080 out.go:358] Setting ErrFile to fd 2...
	I1014 13:53:59.493999   24080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:53:59.494270   24080 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:53:59.494883   24080 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:53:59.494978   24080 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:53:59.495344   24080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:53:59.495393   24080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:53:59.510315   24080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42745
	I1014 13:53:59.510848   24080 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:53:59.511478   24080 main.go:141] libmachine: Using API Version  1
	I1014 13:53:59.511507   24080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:53:59.511868   24080 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:53:59.512058   24080 main.go:141] libmachine: (functional-917108) Calling .GetState
	I1014 13:53:59.514022   24080 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:53:59.514067   24080 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:53:59.528932   24080 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46261
	I1014 13:53:59.529407   24080 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:53:59.529956   24080 main.go:141] libmachine: Using API Version  1
	I1014 13:53:59.529990   24080 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:53:59.530284   24080 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:53:59.530430   24080 main.go:141] libmachine: (functional-917108) Calling .DriverName
	I1014 13:53:59.530658   24080 ssh_runner.go:195] Run: systemctl --version
	I1014 13:53:59.530680   24080 main.go:141] libmachine: (functional-917108) Calling .GetSSHHostname
	I1014 13:53:59.533369   24080 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined MAC address 52:54:00:8b:bc:da in network mk-functional-917108
	I1014 13:53:59.533711   24080 main.go:141] libmachine: (functional-917108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bc:da", ip: ""} in network mk-functional-917108: {Iface:virbr1 ExpiryTime:2024-10-14 14:50:40 +0000 UTC Type:0 Mac:52:54:00:8b:bc:da Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:functional-917108 Clientid:01:52:54:00:8b:bc:da}
	I1014 13:53:59.533818   24080 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined IP address 192.168.39.149 and MAC address 52:54:00:8b:bc:da in network mk-functional-917108
	I1014 13:53:59.534035   24080 main.go:141] libmachine: (functional-917108) Calling .GetSSHPort
	I1014 13:53:59.534167   24080 main.go:141] libmachine: (functional-917108) Calling .GetSSHKeyPath
	I1014 13:53:59.534288   24080 main.go:141] libmachine: (functional-917108) Calling .GetSSHUsername
	I1014 13:53:59.534473   24080 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/functional-917108/id_rsa Username:docker}
	I1014 13:53:59.645804   24080 cache_images.go:289] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar
	I1014 13:53:59.645920   24080 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/echo-server-save.tar
	I1014 13:53:59.655571   24080 ssh_runner.go:362] scp /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --> /var/lib/minikube/images/echo-server-save.tar (4950016 bytes)
	I1014 13:53:59.973086   24080 crio.go:275] Loading image: /var/lib/minikube/images/echo-server-save.tar
	I1014 13:53:59.973144   24080 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar
	W1014 13:54:00.951008   24080 cache_images.go:253] Failed to load cached images for "functional-917108": loading images: CRI-O load /var/lib/minikube/images/echo-server-save.tar: crio load image: sudo podman load -i /var/lib/minikube/images/echo-server-save.tar: Process exited with status 125
	stdout:
	
	stderr:
	Getting image source signatures
	Copying blob sha256:385288f36387f526d4826ab7d5cf1ab0e58bb5684a8257e8d19d9da3773b85da
	Copying config sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
	Writing manifest to image destination
	Storing signatures
	Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)
	I1014 13:54:00.951044   24080 cache_images.go:265] failed pushing to: functional-917108
	I1014 13:54:00.951072   24080 main.go:141] libmachine: Making call to close driver server
	I1014 13:54:00.951079   24080 main.go:141] libmachine: (functional-917108) Calling .Close
	I1014 13:54:00.951318   24080 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
	I1014 13:54:00.951319   24080 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:54:00.951348   24080 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:54:00.951361   24080 main.go:141] libmachine: Making call to close driver server
	I1014 13:54:00.951369   24080 main.go:141] libmachine: (functional-917108) Calling .Close
	I1014 13:54:00.951716   24080 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:54:00.951728   24080 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
	I1014 13:54:00.951740   24080 main.go:141] libmachine: Making call to close connection to plugin binary

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-917108
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image save --daemon kicbase/echo-server:functional-917108 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-917108
functional_test.go:432: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-917108: exit status 1 (18.230946ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-917108

                                                
                                                
** /stderr **
functional_test.go:434: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-917108

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (141.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 node stop m02 -v=7 --alsologtostderr
E1014 13:58:57.490763   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:59:17.972696   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:59:58.934281   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-450021 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.478161514s)

                                                
                                                
-- stdout --
	* Stopping node "ha-450021-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 13:58:53.782593   29338 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:58:53.783065   29338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:58:53.783083   29338 out.go:358] Setting ErrFile to fd 2...
	I1014 13:58:53.783092   29338 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:58:53.783424   29338 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:58:53.783736   29338 mustload.go:65] Loading cluster: ha-450021
	I1014 13:58:53.784211   29338 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:58:53.784229   29338 stop.go:39] StopHost: ha-450021-m02
	I1014 13:58:53.784584   29338 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:58:53.784623   29338 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:58:53.799764   29338 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45799
	I1014 13:58:53.800312   29338 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:58:53.800922   29338 main.go:141] libmachine: Using API Version  1
	I1014 13:58:53.800938   29338 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:58:53.801332   29338 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:58:53.803873   29338 out.go:177] * Stopping node "ha-450021-m02"  ...
	I1014 13:58:53.805227   29338 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1014 13:58:53.805252   29338 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:58:53.805495   29338 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1014 13:58:53.805522   29338 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:58:53.808283   29338 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:58:53.808653   29338 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:58:53.808684   29338 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:58:53.808805   29338 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:58:53.808958   29338 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:58:53.809139   29338 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:58:53.809411   29338 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:58:53.904060   29338 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1014 13:58:53.961290   29338 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1014 13:58:54.015462   29338 main.go:141] libmachine: Stopping "ha-450021-m02"...
	I1014 13:58:54.015485   29338 main.go:141] libmachine: (ha-450021-m02) Calling .GetState
	I1014 13:58:54.016855   29338 main.go:141] libmachine: (ha-450021-m02) Calling .Stop
	I1014 13:58:54.019946   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 0/120
	I1014 13:58:55.021449   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 1/120
	I1014 13:58:56.022704   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 2/120
	I1014 13:58:57.024093   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 3/120
	I1014 13:58:58.025362   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 4/120
	I1014 13:58:59.026630   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 5/120
	I1014 13:59:00.027954   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 6/120
	I1014 13:59:01.029306   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 7/120
	I1014 13:59:02.030845   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 8/120
	I1014 13:59:03.033258   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 9/120
	I1014 13:59:04.035548   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 10/120
	I1014 13:59:05.037174   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 11/120
	I1014 13:59:06.038678   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 12/120
	I1014 13:59:07.040075   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 13/120
	I1014 13:59:08.041269   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 14/120
	I1014 13:59:09.043099   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 15/120
	I1014 13:59:10.045026   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 16/120
	I1014 13:59:11.046532   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 17/120
	I1014 13:59:12.048437   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 18/120
	I1014 13:59:13.050254   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 19/120
	I1014 13:59:14.051895   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 20/120
	I1014 13:59:15.053349   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 21/120
	I1014 13:59:16.054657   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 22/120
	I1014 13:59:17.056567   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 23/120
	I1014 13:59:18.057765   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 24/120
	I1014 13:59:19.059631   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 25/120
	I1014 13:59:20.061096   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 26/120
	I1014 13:59:21.062186   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 27/120
	I1014 13:59:22.063707   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 28/120
	I1014 13:59:23.065839   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 29/120
	I1014 13:59:24.068148   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 30/120
	I1014 13:59:25.069432   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 31/120
	I1014 13:59:26.070755   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 32/120
	I1014 13:59:27.072238   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 33/120
	I1014 13:59:28.073749   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 34/120
	I1014 13:59:29.075776   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 35/120
	I1014 13:59:30.077226   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 36/120
	I1014 13:59:31.078626   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 37/120
	I1014 13:59:32.080332   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 38/120
	I1014 13:59:33.081880   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 39/120
	I1014 13:59:34.083538   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 40/120
	I1014 13:59:35.085009   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 41/120
	I1014 13:59:36.087160   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 42/120
	I1014 13:59:37.089182   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 43/120
	I1014 13:59:38.090682   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 44/120
	I1014 13:59:39.093021   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 45/120
	I1014 13:59:40.094541   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 46/120
	I1014 13:59:41.096578   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 47/120
	I1014 13:59:42.099157   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 48/120
	I1014 13:59:43.101074   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 49/120
	I1014 13:59:44.103197   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 50/120
	I1014 13:59:45.105365   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 51/120
	I1014 13:59:46.107497   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 52/120
	I1014 13:59:47.109486   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 53/120
	I1014 13:59:48.111360   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 54/120
	I1014 13:59:49.112998   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 55/120
	I1014 13:59:50.114537   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 56/120
	I1014 13:59:51.116011   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 57/120
	I1014 13:59:52.117458   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 58/120
	I1014 13:59:53.118908   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 59/120
	I1014 13:59:54.120819   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 60/120
	I1014 13:59:55.122216   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 61/120
	I1014 13:59:56.123445   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 62/120
	I1014 13:59:57.124884   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 63/120
	I1014 13:59:58.126181   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 64/120
	I1014 13:59:59.128276   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 65/120
	I1014 14:00:00.129693   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 66/120
	I1014 14:00:01.131223   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 67/120
	I1014 14:00:02.132510   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 68/120
	I1014 14:00:03.133751   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 69/120
	I1014 14:00:04.135558   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 70/120
	I1014 14:00:05.136809   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 71/120
	I1014 14:00:06.138499   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 72/120
	I1014 14:00:07.139972   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 73/120
	I1014 14:00:08.141395   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 74/120
	I1014 14:00:09.143334   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 75/120
	I1014 14:00:10.144563   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 76/120
	I1014 14:00:11.146456   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 77/120
	I1014 14:00:12.148145   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 78/120
	I1014 14:00:13.149948   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 79/120
	I1014 14:00:14.151947   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 80/120
	I1014 14:00:15.153231   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 81/120
	I1014 14:00:16.154651   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 82/120
	I1014 14:00:17.156133   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 83/120
	I1014 14:00:18.157583   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 84/120
	I1014 14:00:19.159187   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 85/120
	I1014 14:00:20.160985   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 86/120
	I1014 14:00:21.162132   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 87/120
	I1014 14:00:22.163550   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 88/120
	I1014 14:00:23.165182   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 89/120
	I1014 14:00:24.166986   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 90/120
	I1014 14:00:25.169107   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 91/120
	I1014 14:00:26.170772   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 92/120
	I1014 14:00:27.173298   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 93/120
	I1014 14:00:28.174923   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 94/120
	I1014 14:00:29.176775   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 95/120
	I1014 14:00:30.177909   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 96/120
	I1014 14:00:31.179340   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 97/120
	I1014 14:00:32.180637   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 98/120
	I1014 14:00:33.182388   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 99/120
	I1014 14:00:34.184515   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 100/120
	I1014 14:00:35.185972   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 101/120
	I1014 14:00:36.187882   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 102/120
	I1014 14:00:37.189388   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 103/120
	I1014 14:00:38.190977   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 104/120
	I1014 14:00:39.192763   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 105/120
	I1014 14:00:40.194260   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 106/120
	I1014 14:00:41.195414   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 107/120
	I1014 14:00:42.197076   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 108/120
	I1014 14:00:43.198472   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 109/120
	I1014 14:00:44.200543   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 110/120
	I1014 14:00:45.201785   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 111/120
	I1014 14:00:46.203241   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 112/120
	I1014 14:00:47.205077   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 113/120
	I1014 14:00:48.206412   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 114/120
	I1014 14:00:49.208261   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 115/120
	I1014 14:00:50.209517   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 116/120
	I1014 14:00:51.210798   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 117/120
	I1014 14:00:52.212137   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 118/120
	I1014 14:00:53.214219   29338 main.go:141] libmachine: (ha-450021-m02) Waiting for machine to stop 119/120
	I1014 14:00:54.214890   29338 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1014 14:00:54.215053   29338 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-450021 node stop m02 -v=7 --alsologtostderr": exit status 30
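The stderr above shows libmachine polling the VM state once per second, logging "Waiting for machine to stop i/120", and giving up after the 120-attempt budget with exit status 30. A minimal sketch of that wait-for-stop pattern (an illustration of the loop implied by the log, not minikube's actual libmachine code) could look like:

	// Sketch only: poll a VM state callback once per second until it reports
	// "Stopped" or the retry budget is exhausted, mirroring the
	// "Waiting for machine to stop i/120" lines above.
	package main

	import (
		"fmt"
		"log"
		"time"
	)

	func waitForStop(getState func() (string, error), maxTries int) error {
		for i := 0; i < maxTries; i++ {
			if state, err := getState(); err == nil && state == "Stopped" {
				return nil
			}
			log.Printf("Waiting for machine to stop %d/%d", i, maxTries)
			time.Sleep(time.Second)
		}
		state, _ := getState()
		return fmt.Errorf("unable to stop vm, current state %q", state)
	}

	func main() {
		// Hypothetical state source that never reports "Stopped"; the real run
		// used a budget of 120 attempts, a small one is used here to keep the
		// example quick.
		err := waitForStop(func() (string, error) { return "Running", nil }, 3)
		fmt.Println(err)
	}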
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
E1014 14:01:06.400980   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr: (18.79977431s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-450021 -n ha-450021
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 logs -n 25: (1.431820041s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m03_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m04 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp testdata/cp-test.txt                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m04_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03:/home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m03 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-450021 node stop m02 -v=7                                                     | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:54:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:54:19.812271   25306 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:54:19.812610   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.812625   25306 out.go:358] Setting ErrFile to fd 2...
	I1014 13:54:19.812632   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.813049   25306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:54:19.813610   25306 out.go:352] Setting JSON to false
	I1014 13:54:19.814483   25306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2210,"bootTime":1728911850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:54:19.814571   25306 start.go:139] virtualization: kvm guest
	I1014 13:54:19.816884   25306 out.go:177] * [ha-450021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:54:19.818710   25306 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:54:19.818708   25306 notify.go:220] Checking for updates...
	I1014 13:54:19.821425   25306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:54:19.822777   25306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:54:19.824007   25306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.825232   25306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:54:19.826443   25306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:54:19.827738   25306 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:54:19.861394   25306 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 13:54:19.862707   25306 start.go:297] selected driver: kvm2
	I1014 13:54:19.862720   25306 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:54:19.862734   25306 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:54:19.863393   25306 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.863486   25306 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:54:19.878143   25306 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:54:19.878185   25306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:54:19.878407   25306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:54:19.878437   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:19.878478   25306 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 13:54:19.878486   25306 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:54:19.878530   25306 start.go:340] cluster config:
	{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:19.878657   25306 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.881226   25306 out.go:177] * Starting "ha-450021" primary control-plane node in "ha-450021" cluster
	I1014 13:54:19.882326   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:19.882357   25306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:54:19.882366   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:54:19.882441   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:54:19.882451   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:54:19.882789   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:19.882811   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json: {Name:mk7e7a81dd8e8c0d913c7421cc0d458f1e8a36b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:19.882936   25306 start.go:360] acquireMachinesLock for ha-450021: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:54:19.882963   25306 start.go:364] duration metric: took 16.489µs to acquireMachinesLock for "ha-450021"
	I1014 13:54:19.882982   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:54:19.883029   25306 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 13:54:19.884643   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:54:19.884761   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:19.884802   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:19.899595   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I1014 13:54:19.900085   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:19.900603   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:54:19.900622   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:19.900928   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:19.901089   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:19.901224   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:19.901350   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:54:19.901382   25306 client.go:168] LocalClient.Create starting
	I1014 13:54:19.901414   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:54:19.901441   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901454   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901498   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:54:19.901515   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901544   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901570   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:54:19.901582   25306 main.go:141] libmachine: (ha-450021) Calling .PreCreateCheck
	I1014 13:54:19.901916   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:19.902252   25306 main.go:141] libmachine: Creating machine...
	I1014 13:54:19.902264   25306 main.go:141] libmachine: (ha-450021) Calling .Create
	I1014 13:54:19.902384   25306 main.go:141] libmachine: (ha-450021) Creating KVM machine...
	I1014 13:54:19.903685   25306 main.go:141] libmachine: (ha-450021) DBG | found existing default KVM network
	I1014 13:54:19.904369   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.904236   25330 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1014 13:54:19.904404   25306 main.go:141] libmachine: (ha-450021) DBG | created network xml: 
	I1014 13:54:19.904424   25306 main.go:141] libmachine: (ha-450021) DBG | <network>
	I1014 13:54:19.904433   25306 main.go:141] libmachine: (ha-450021) DBG |   <name>mk-ha-450021</name>
	I1014 13:54:19.904439   25306 main.go:141] libmachine: (ha-450021) DBG |   <dns enable='no'/>
	I1014 13:54:19.904447   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904459   25306 main.go:141] libmachine: (ha-450021) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 13:54:19.904466   25306 main.go:141] libmachine: (ha-450021) DBG |     <dhcp>
	I1014 13:54:19.904474   25306 main.go:141] libmachine: (ha-450021) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 13:54:19.904486   25306 main.go:141] libmachine: (ha-450021) DBG |     </dhcp>
	I1014 13:54:19.904496   25306 main.go:141] libmachine: (ha-450021) DBG |   </ip>
	I1014 13:54:19.904507   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904513   25306 main.go:141] libmachine: (ha-450021) DBG | </network>
	I1014 13:54:19.904522   25306 main.go:141] libmachine: (ha-450021) DBG | 
	I1014 13:54:19.910040   25306 main.go:141] libmachine: (ha-450021) DBG | trying to create private KVM network mk-ha-450021 192.168.39.0/24...
	I1014 13:54:19.971833   25306 main.go:141] libmachine: (ha-450021) DBG | private KVM network mk-ha-450021 192.168.39.0/24 created
	I1014 13:54:19.971862   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.971805   25330 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.971874   25306 main.go:141] libmachine: (ha-450021) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:19.971891   25306 main.go:141] libmachine: (ha-450021) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:54:19.971967   25306 main.go:141] libmachine: (ha-450021) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:54:20.214152   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.214048   25330 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa...
	I1014 13:54:20.270347   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270208   25330 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk...
	I1014 13:54:20.270384   25306 main.go:141] libmachine: (ha-450021) DBG | Writing magic tar header
	I1014 13:54:20.270399   25306 main.go:141] libmachine: (ha-450021) DBG | Writing SSH key tar header
	I1014 13:54:20.270411   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270359   25330 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:20.270469   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021
	I1014 13:54:20.270577   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 (perms=drwx------)
	I1014 13:54:20.270629   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:54:20.270649   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:54:20.270663   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:20.270676   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:54:20.270690   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:54:20.270697   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:54:20.270707   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:54:20.270716   25306 main.go:141] libmachine: (ha-450021) Creating domain...
	I1014 13:54:20.270725   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:54:20.270732   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:54:20.270758   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:54:20.270778   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home
	I1014 13:54:20.270791   25306 main.go:141] libmachine: (ha-450021) DBG | Skipping /home - not owner
	I1014 13:54:20.271873   25306 main.go:141] libmachine: (ha-450021) define libvirt domain using xml: 
	I1014 13:54:20.271895   25306 main.go:141] libmachine: (ha-450021) <domain type='kvm'>
	I1014 13:54:20.271904   25306 main.go:141] libmachine: (ha-450021)   <name>ha-450021</name>
	I1014 13:54:20.271909   25306 main.go:141] libmachine: (ha-450021)   <memory unit='MiB'>2200</memory>
	I1014 13:54:20.271915   25306 main.go:141] libmachine: (ha-450021)   <vcpu>2</vcpu>
	I1014 13:54:20.271922   25306 main.go:141] libmachine: (ha-450021)   <features>
	I1014 13:54:20.271942   25306 main.go:141] libmachine: (ha-450021)     <acpi/>
	I1014 13:54:20.271950   25306 main.go:141] libmachine: (ha-450021)     <apic/>
	I1014 13:54:20.271956   25306 main.go:141] libmachine: (ha-450021)     <pae/>
	I1014 13:54:20.271997   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272026   25306 main.go:141] libmachine: (ha-450021)   </features>
	I1014 13:54:20.272048   25306 main.go:141] libmachine: (ha-450021)   <cpu mode='host-passthrough'>
	I1014 13:54:20.272058   25306 main.go:141] libmachine: (ha-450021)   
	I1014 13:54:20.272070   25306 main.go:141] libmachine: (ha-450021)   </cpu>
	I1014 13:54:20.272081   25306 main.go:141] libmachine: (ha-450021)   <os>
	I1014 13:54:20.272089   25306 main.go:141] libmachine: (ha-450021)     <type>hvm</type>
	I1014 13:54:20.272100   25306 main.go:141] libmachine: (ha-450021)     <boot dev='cdrom'/>
	I1014 13:54:20.272132   25306 main.go:141] libmachine: (ha-450021)     <boot dev='hd'/>
	I1014 13:54:20.272144   25306 main.go:141] libmachine: (ha-450021)     <bootmenu enable='no'/>
	I1014 13:54:20.272150   25306 main.go:141] libmachine: (ha-450021)   </os>
	I1014 13:54:20.272158   25306 main.go:141] libmachine: (ha-450021)   <devices>
	I1014 13:54:20.272173   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='cdrom'>
	I1014 13:54:20.272188   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/boot2docker.iso'/>
	I1014 13:54:20.272198   25306 main.go:141] libmachine: (ha-450021)       <target dev='hdc' bus='scsi'/>
	I1014 13:54:20.272208   25306 main.go:141] libmachine: (ha-450021)       <readonly/>
	I1014 13:54:20.272217   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272224   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='disk'>
	I1014 13:54:20.272233   25306 main.go:141] libmachine: (ha-450021)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:54:20.272252   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk'/>
	I1014 13:54:20.272267   25306 main.go:141] libmachine: (ha-450021)       <target dev='hda' bus='virtio'/>
	I1014 13:54:20.272277   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272287   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272303   25306 main.go:141] libmachine: (ha-450021)       <source network='mk-ha-450021'/>
	I1014 13:54:20.272315   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272323   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272332   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272356   25306 main.go:141] libmachine: (ha-450021)       <source network='default'/>
	I1014 13:54:20.272378   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272390   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272397   25306 main.go:141] libmachine: (ha-450021)     <serial type='pty'>
	I1014 13:54:20.272402   25306 main.go:141] libmachine: (ha-450021)       <target port='0'/>
	I1014 13:54:20.272409   25306 main.go:141] libmachine: (ha-450021)     </serial>
	I1014 13:54:20.272414   25306 main.go:141] libmachine: (ha-450021)     <console type='pty'>
	I1014 13:54:20.272421   25306 main.go:141] libmachine: (ha-450021)       <target type='serial' port='0'/>
	I1014 13:54:20.272426   25306 main.go:141] libmachine: (ha-450021)     </console>
	I1014 13:54:20.272433   25306 main.go:141] libmachine: (ha-450021)     <rng model='virtio'>
	I1014 13:54:20.272442   25306 main.go:141] libmachine: (ha-450021)       <backend model='random'>/dev/random</backend>
	I1014 13:54:20.272449   25306 main.go:141] libmachine: (ha-450021)     </rng>
	I1014 13:54:20.272464   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272479   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272490   25306 main.go:141] libmachine: (ha-450021)   </devices>
	I1014 13:54:20.272499   25306 main.go:141] libmachine: (ha-450021) </domain>
	I1014 13:54:20.272508   25306 main.go:141] libmachine: (ha-450021) 
	I1014 13:54:20.276743   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:57:d6:54 in network default
	I1014 13:54:20.277233   25306 main.go:141] libmachine: (ha-450021) Ensuring networks are active...
	I1014 13:54:20.277256   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:20.277849   25306 main.go:141] libmachine: (ha-450021) Ensuring network default is active
	I1014 13:54:20.278100   25306 main.go:141] libmachine: (ha-450021) Ensuring network mk-ha-450021 is active
	I1014 13:54:20.278557   25306 main.go:141] libmachine: (ha-450021) Getting domain xml...
	I1014 13:54:20.279179   25306 main.go:141] libmachine: (ha-450021) Creating domain...
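
Editor's note: the XML dumped above is what the kvm2 driver hands to libvirt, first a private <network>, then the <domain> for the VM. A minimal sketch of the same two calls through the libvirt Go bindings follows; libvirt.org/go/libvirt (cgo against the local libvirt headers) and the file names network.xml / domain.xml are assumptions for illustration, not what the driver literally does.

    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define and start the private network from XML like the <network> block above.
        netXML, err := os.ReadFile("network.xml") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        network, err := conn.NetworkDefineXML(string(netXML))
        if err != nil {
            log.Fatal(err)
        }
        defer network.Free()
        if err := network.Create(); err != nil { // brings the bridge and dnsmasq up
            log.Fatal(err)
        }

        // Define and boot the VM from XML like the <domain> block above.
        domXML, err := os.ReadFile("domain.xml") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        dom, err := conn.DomainDefineXML(string(domXML))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil { // boots the VM; the driver then waits for an IP
            log.Fatal(err)
        }
    }
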
	I1014 13:54:21.462335   25306 main.go:141] libmachine: (ha-450021) Waiting to get IP...
	I1014 13:54:21.463069   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.463429   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.463469   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.463416   25330 retry.go:31] will retry after 252.896893ms: waiting for machine to come up
	I1014 13:54:21.717838   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.718276   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.718307   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.718253   25330 retry.go:31] will retry after 323.417298ms: waiting for machine to come up
	I1014 13:54:22.043653   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.044089   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.044113   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.044049   25330 retry.go:31] will retry after 429.247039ms: waiting for machine to come up
	I1014 13:54:22.474550   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.475007   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.475032   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.474972   25330 retry.go:31] will retry after 584.602082ms: waiting for machine to come up
	I1014 13:54:23.060636   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.061070   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.061096   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.061025   25330 retry.go:31] will retry after 757.618183ms: waiting for machine to come up
	I1014 13:54:23.819839   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.820349   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.820388   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.820305   25330 retry.go:31] will retry after 770.363721ms: waiting for machine to come up
	I1014 13:54:24.592151   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:24.592528   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:24.592563   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:24.592475   25330 retry.go:31] will retry after 746.543201ms: waiting for machine to come up
	I1014 13:54:25.340318   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:25.340826   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:25.340855   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:25.340782   25330 retry.go:31] will retry after 1.064448623s: waiting for machine to come up
	I1014 13:54:26.407039   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:26.407396   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:26.407443   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:26.407341   25330 retry.go:31] will retry after 1.702825811s: waiting for machine to come up
	I1014 13:54:28.112412   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:28.112812   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:28.112833   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:28.112771   25330 retry.go:31] will retry after 2.323768802s: waiting for machine to come up
	I1014 13:54:30.438077   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:30.438423   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:30.438463   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:30.438389   25330 retry.go:31] will retry after 2.882558658s: waiting for machine to come up
	I1014 13:54:33.324506   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:33.324987   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:33.325011   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:33.324949   25330 retry.go:31] will retry after 3.489582892s: waiting for machine to come up
	I1014 13:54:36.817112   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:36.817504   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:36.817523   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:36.817476   25330 retry.go:31] will retry after 4.118141928s: waiting for machine to come up
	I1014 13:54:40.937526   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938020   25306 main.go:141] libmachine: (ha-450021) Found IP for machine: 192.168.39.176
	I1014 13:54:40.938039   25306 main.go:141] libmachine: (ha-450021) Reserving static IP address...
	I1014 13:54:40.938070   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has current primary IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938454   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find host DHCP lease matching {name: "ha-450021", mac: "52:54:00:a1:20:5f", ip: "192.168.39.176"} in network mk-ha-450021
	I1014 13:54:41.006419   25306 main.go:141] libmachine: (ha-450021) DBG | Getting to WaitForSSH function...
	I1014 13:54:41.006450   25306 main.go:141] libmachine: (ha-450021) Reserved static IP address: 192.168.39.176
	I1014 13:54:41.006463   25306 main.go:141] libmachine: (ha-450021) Waiting for SSH to be available...
	I1014 13:54:41.008964   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009322   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.009350   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009443   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH client type: external
	I1014 13:54:41.009470   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa (-rw-------)
	I1014 13:54:41.009582   25306 main.go:141] libmachine: (ha-450021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:54:41.009598   25306 main.go:141] libmachine: (ha-450021) DBG | About to run SSH command:
	I1014 13:54:41.009610   25306 main.go:141] libmachine: (ha-450021) DBG | exit 0
	I1014 13:54:41.138539   25306 main.go:141] libmachine: (ha-450021) DBG | SSH cmd err, output: <nil>: 
	I1014 13:54:41.138806   25306 main.go:141] libmachine: (ha-450021) KVM machine creation complete!
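
Editor's note: the "will retry after ..." lines above are a poll loop whose delay grows with jitter until the domain's DHCP lease appears. The sketch below only illustrates that pattern; the actual policy lives in minikube's retry package and may differ in constants and jitter.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitFor retries check with a jittered, growing delay until it succeeds
    // or the overall deadline passes.
    func waitFor(check func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        delay := 250 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            // Jitter keeps parallel waiters from polling in lock step.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay = delay * 3 / 2
            }
        }
    }

    func main() {
        start := time.Now()
        err := waitFor(func() error {
            // Stand-in for "look up the DHCP lease for the domain's MAC address".
            if time.Since(start) < 3*time.Second {
                return errors.New("unable to find current IP address")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("done:", err)
    }
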
	I1014 13:54:41.139099   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:41.139669   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139826   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139970   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:54:41.139983   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:54:41.141211   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:54:41.141221   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:54:41.141226   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:54:41.141232   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.143400   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143673   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.143693   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143898   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.144069   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144217   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144390   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.144570   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.144741   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.144750   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:54:41.257764   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:54:41.257787   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:54:41.257794   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.260355   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260721   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.260755   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260886   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.261058   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261185   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261349   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.261568   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.261770   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.261781   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:54:41.387334   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:54:41.387407   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:54:41.387415   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:54:41.387428   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387694   25306 buildroot.go:166] provisioning hostname "ha-450021"
	I1014 13:54:41.387742   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.390287   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390677   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.390702   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390836   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.391004   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391122   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391234   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.391358   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.391508   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.391518   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname
	I1014 13:54:41.517186   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 13:54:41.517216   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.520093   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520451   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.520480   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520651   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.520827   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.520970   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.521077   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.521209   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.521391   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.521405   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:54:41.643685   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
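
Editor's note: provisioning steps like the hostname and /etc/hosts commands above are single commands run over SSH with the machine's generated key. A minimal sketch using golang.org/x/crypto/ssh is shown below; minikube's own ssh_runner and native SSH client differ, and host-key checking is disabled here only to keep the example short.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote runs one command on the VM using key-based auth and returns its output.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        keyPEM, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(keyPEM)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // sketch only; do not verify this way in production
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("192.168.39.176:22", "docker",
            "/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa",
            `sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname`)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }
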
	I1014 13:54:41.643709   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:54:41.643742   25306 buildroot.go:174] setting up certificates
	I1014 13:54:41.643754   25306 provision.go:84] configureAuth start
	I1014 13:54:41.643778   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.644050   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:41.646478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.646878   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.646897   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.647059   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.648912   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649213   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.649236   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649373   25306 provision.go:143] copyHostCerts
	I1014 13:54:41.649402   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649434   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:54:41.649453   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649515   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:54:41.649594   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649617   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:54:41.649623   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649649   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:54:41.649688   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649704   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:54:41.649710   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649730   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:54:41.649772   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021 san=[127.0.0.1 192.168.39.176 ha-450021 localhost minikube]
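
Editor's note: the "generating server cert ... san=[...]" step above amounts to issuing an x509 server certificate, signed by the profile's CA, whose SAN list mixes DNS names and IP addresses. The stdlib-only sketch below illustrates that; the file names, the PKCS#1 CA key encoding, and the validity period are assumptions, not necessarily what minikube uses.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    // issueServerCert signs a new server certificate with the given CA (inputs assumed to be valid PEM).
    func issueServerCert(caCertPEM, caKeyPEM []byte, sans []string) (certPEM, keyPEM []byte, err error) {
        caBlock, _ := pem.Decode(caCertPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            return nil, nil, err
        }
        keyBlock, _ := pem.Decode(caKeyPEM)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 CA key
        if err != nil {
            return nil, nil, err
        }
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-450021"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Split the SAN list into DNS names and IP addresses, as x509 requires.
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)})
        return certPEM, keyPEM, nil
    }

    func main() {
        caCert, _ := os.ReadFile("ca.pem")     // placeholder paths
        caKey, _ := os.ReadFile("ca-key.pem")
        cert, key, err := issueServerCert(caCert, caKey,
            []string{"127.0.0.1", "192.168.39.176", "ha-450021", "localhost", "minikube"})
        if err != nil {
            log.Fatal(err)
        }
        os.WriteFile("server.pem", cert, 0o644)
        os.WriteFile("server-key.pem", key, 0o600)
    }
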
	I1014 13:54:41.997744   25306 provision.go:177] copyRemoteCerts
	I1014 13:54:41.997799   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:54:41.997817   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.000612   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.000903   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.000935   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.001075   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.001266   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.001429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.001565   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.088827   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:54:42.088897   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:54:42.116095   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:54:42.116160   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:54:42.142757   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:54:42.142813   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 13:54:42.169537   25306 provision.go:87] duration metric: took 525.766906ms to configureAuth
	I1014 13:54:42.169566   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:54:42.169754   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:54:42.169842   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.173229   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174055   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.174080   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174242   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.174429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174574   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.174880   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.175029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.175043   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:54:42.406341   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:54:42.406376   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:54:42.406388   25306 main.go:141] libmachine: (ha-450021) Calling .GetURL
	I1014 13:54:42.407812   25306 main.go:141] libmachine: (ha-450021) DBG | Using libvirt version 6000000
	I1014 13:54:42.409824   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410126   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.410157   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410300   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:54:42.410319   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:54:42.410327   25306 client.go:171] duration metric: took 22.508934376s to LocalClient.Create
	I1014 13:54:42.410349   25306 start.go:167] duration metric: took 22.50900119s to libmachine.API.Create "ha-450021"
	I1014 13:54:42.410361   25306 start.go:293] postStartSetup for "ha-450021" (driver="kvm2")
	I1014 13:54:42.410370   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:54:42.410386   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.410579   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:54:42.410619   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.412494   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412776   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.412801   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.413098   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.413204   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.413344   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.501187   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:54:42.505548   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:54:42.505573   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:54:42.505640   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:54:42.505739   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:54:42.505751   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:54:42.505871   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:54:42.515100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:42.540037   25306 start.go:296] duration metric: took 129.664961ms for postStartSetup
	I1014 13:54:42.540090   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:42.540652   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.543542   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.543870   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.543893   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.544115   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:42.544316   25306 start.go:128] duration metric: took 22.661278968s to createHost
	I1014 13:54:42.544340   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.546241   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546584   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.546619   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546735   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.546887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547016   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547115   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.547241   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.547400   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.547410   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:54:42.659258   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914082.633821014
	
	I1014 13:54:42.659276   25306 fix.go:216] guest clock: 1728914082.633821014
	I1014 13:54:42.659283   25306 fix.go:229] Guest: 2024-10-14 13:54:42.633821014 +0000 UTC Remote: 2024-10-14 13:54:42.544328107 +0000 UTC m=+22.768041164 (delta=89.492907ms)
	I1014 13:54:42.659308   25306 fix.go:200] guest clock delta is within tolerance: 89.492907ms
	I1014 13:54:42.659315   25306 start.go:83] releasing machines lock for "ha-450021", held for 22.776339529s
	I1014 13:54:42.659340   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.659634   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.662263   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662566   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.662590   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662762   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663245   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663382   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663435   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:54:42.663485   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.663584   25306 ssh_runner.go:195] Run: cat /version.json
	I1014 13:54:42.663609   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.665952   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666140   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666285   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666310   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666455   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666495   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666742   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.666851   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.666858   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.667031   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.667026   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.667128   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.747369   25306 ssh_runner.go:195] Run: systemctl --version
	I1014 13:54:42.781149   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:54:42.939239   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:54:42.945827   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:54:42.945908   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:54:42.961868   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:54:42.961898   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:54:42.961965   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:54:42.979523   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:54:42.994309   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:54:42.994364   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:54:43.009231   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:54:43.023792   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:54:43.139525   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:54:43.303272   25306 docker.go:233] disabling docker service ...
	I1014 13:54:43.303333   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:54:43.318132   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:54:43.331650   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:54:43.447799   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:54:43.574532   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:54:43.588882   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:54:43.606788   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:54:43.606849   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.617065   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:54:43.617138   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.627421   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.637692   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.648944   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:54:43.659223   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.669296   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.686887   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.697925   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:54:43.707402   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:54:43.707476   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:54:43.720091   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:54:43.729667   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:43.845781   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:54:43.932782   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:54:43.932868   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:54:43.938172   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:54:43.938228   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:54:43.941774   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:54:43.979317   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:54:43.979415   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.006952   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.038472   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:54:44.039762   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:44.042304   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042634   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:44.042661   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042831   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:54:44.046611   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:54:44.059369   25306 kubeadm.go:883] updating cluster {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:54:44.059491   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:44.059551   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:44.090998   25306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 13:54:44.091053   25306 ssh_runner.go:195] Run: which lz4
	I1014 13:54:44.094706   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 13:54:44.094776   25306 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 13:54:44.098775   25306 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 13:54:44.098800   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 13:54:45.421436   25306 crio.go:462] duration metric: took 1.326676583s to copy over tarball
	I1014 13:54:45.421513   25306 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 13:54:47.393636   25306 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97209405s)
	I1014 13:54:47.393677   25306 crio.go:469] duration metric: took 1.97220742s to extract the tarball
	I1014 13:54:47.393687   25306 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 13:54:47.430848   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:47.475174   25306 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:54:47.475197   25306 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:54:47.475204   25306 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.1 crio true true} ...
	I1014 13:54:47.475299   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:54:47.475375   25306 ssh_runner.go:195] Run: crio config
	I1014 13:54:47.520162   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:47.520183   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:54:47.520192   25306 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:54:47.520214   25306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450021 NodeName:ha-450021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:54:47.520316   25306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-450021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 13:54:47.520338   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:54:47.520375   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:54:47.537448   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:54:47.537535   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1014 13:54:47.537577   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:54:47.551104   25306 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:54:47.551176   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 13:54:47.562687   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1014 13:54:47.578926   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:54:47.594827   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1014 13:54:47.610693   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 13:54:47.626695   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:54:47.630338   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:54:47.642280   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:47.756050   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:54:47.773461   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.176
	I1014 13:54:47.773484   25306 certs.go:194] generating shared ca certs ...
	I1014 13:54:47.773503   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:47.773705   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:54:47.773829   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:54:47.773848   25306 certs.go:256] generating profile certs ...
	I1014 13:54:47.773913   25306 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:54:47.773930   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt with IP's: []
	I1014 13:54:48.113501   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt ...
	I1014 13:54:48.113531   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt: {Name:mkbf9820119866d476b6914d2148d200b676c657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113715   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key ...
	I1014 13:54:48.113731   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key: {Name:mk7d74bdc4633efc50efa47cc87ab000404cd20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113831   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180
	I1014 13:54:48.113850   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.254]
	I1014 13:54:48.267925   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 ...
	I1014 13:54:48.267957   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180: {Name:mkd19ba2c223d25d9a0673db3befa3152f7a2c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268143   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 ...
	I1014 13:54:48.268160   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180: {Name:mkd725fc60a32f585bc691d5e3dd373c3c488835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268262   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:54:48.268370   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:54:48.268460   25306 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:54:48.268481   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt with IP's: []
	I1014 13:54:48.434515   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt ...
	I1014 13:54:48.434539   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt: {Name:mk37070511c0eff0f5c442e93060bbaddee85673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434689   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key ...
	I1014 13:54:48.434700   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key: {Name:mk4252d17e842b88b135b952004ba8203bf67100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434774   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:54:48.434791   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:54:48.434801   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:54:48.434813   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:54:48.434823   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:54:48.434833   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:54:48.434843   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:54:48.434854   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:54:48.434895   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:54:48.434936   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:54:48.434945   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:54:48.434969   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:54:48.434990   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:54:48.435010   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:54:48.435044   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:48.435072   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.435084   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.435096   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.436322   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:54:48.461913   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:54:48.484404   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:54:48.506815   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:54:48.532871   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 13:54:48.555023   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:54:48.577102   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:54:48.599841   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:54:48.622100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:54:48.644244   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:54:48.666067   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:54:48.688272   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:54:48.704452   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:54:48.709950   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:54:48.720462   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724736   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724786   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.730515   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:54:48.740926   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:54:48.751163   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755136   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755173   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.760601   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:54:48.771042   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:54:48.781517   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785721   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785757   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.791039   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:54:48.801295   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:54:48.805300   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:54:48.805353   25306 kubeadm.go:392] StartCluster: {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:48.805425   25306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:54:48.805474   25306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:54:48.846958   25306 cri.go:89] found id: ""
	I1014 13:54:48.847017   25306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:54:48.856997   25306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:54:48.866515   25306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:54:48.876223   25306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:54:48.876241   25306 kubeadm.go:157] found existing configuration files:
	
	I1014 13:54:48.876288   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:54:48.885144   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:54:48.885195   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:54:48.894355   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:54:48.902957   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:54:48.903009   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:54:48.912153   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.921701   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:54:48.921759   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.931128   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:54:48.939839   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:54:48.939871   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 13:54:48.948948   25306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 13:54:49.168356   25306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:55:00.103864   25306 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:55:00.103941   25306 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:55:00.104029   25306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:55:00.104143   25306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:55:00.104280   25306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:55:00.104375   25306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:55:00.106272   25306 out.go:235]   - Generating certificates and keys ...
	I1014 13:55:00.106362   25306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:55:00.106429   25306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:55:00.106511   25306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:55:00.106612   25306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:55:00.106709   25306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:55:00.106793   25306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:55:00.106864   25306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:55:00.107022   25306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107089   25306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:55:00.107238   25306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107331   25306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:55:00.107416   25306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:55:00.107496   25306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:55:00.107576   25306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:55:00.107656   25306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:55:00.107736   25306 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:55:00.107811   25306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:55:00.107905   25306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:55:00.107957   25306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:55:00.108061   25306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:55:00.108162   25306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:55:00.109922   25306 out.go:235]   - Booting up control plane ...
	I1014 13:55:00.110034   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:55:00.110132   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:55:00.110214   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:55:00.110345   25306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:55:00.110449   25306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:55:00.110494   25306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:55:00.110622   25306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:55:00.110705   25306 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:55:00.110755   25306 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002174478s
	I1014 13:55:00.110843   25306 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:55:00.110911   25306 kubeadm.go:310] [api-check] The API server is healthy after 5.813875513s
	I1014 13:55:00.111034   25306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:55:00.111171   25306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:55:00.111231   25306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:55:00.111391   25306 kubeadm.go:310] [mark-control-plane] Marking the node ha-450021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:55:00.111441   25306 kubeadm.go:310] [bootstrap-token] Using token: e8eaxr.5trfuyfb27hv7e11
	I1014 13:55:00.112896   25306 out.go:235]   - Configuring RBAC rules ...
	I1014 13:55:00.113020   25306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:55:00.113086   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:55:00.113219   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:55:00.113369   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:55:00.113527   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:55:00.113646   25306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:55:00.113778   25306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:55:00.113819   25306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:55:00.113862   25306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:55:00.113868   25306 kubeadm.go:310] 
	I1014 13:55:00.113922   25306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:55:00.113928   25306 kubeadm.go:310] 
	I1014 13:55:00.113997   25306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:55:00.114004   25306 kubeadm.go:310] 
	I1014 13:55:00.114048   25306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:55:00.114129   25306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:55:00.114180   25306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:55:00.114188   25306 kubeadm.go:310] 
	I1014 13:55:00.114245   25306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:55:00.114263   25306 kubeadm.go:310] 
	I1014 13:55:00.114330   25306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:55:00.114341   25306 kubeadm.go:310] 
	I1014 13:55:00.114411   25306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:55:00.114513   25306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:55:00.114572   25306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:55:00.114578   25306 kubeadm.go:310] 
	I1014 13:55:00.114693   25306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:55:00.114784   25306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:55:00.114793   25306 kubeadm.go:310] 
	I1014 13:55:00.114891   25306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.114977   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 13:55:00.114998   25306 kubeadm.go:310] 	--control-plane 
	I1014 13:55:00.115002   25306 kubeadm.go:310] 
	I1014 13:55:00.115074   25306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:55:00.115080   25306 kubeadm.go:310] 
	I1014 13:55:00.115154   25306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.115275   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
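For reference, the --discovery-token-ca-cert-hash value that kubeadm prints in the join commands above is the SHA-256 digest of the cluster CA certificate's public key in SubjectPublicKeyInfo (DER) form. Below is a minimal Go sketch that recomputes it from a CA certificate; the /etc/kubernetes/pki/ca.crt path is only the conventional location and is an assumption here, not something taken from this log. Run against this cluster's CA it should reproduce the sha256:4e1b3ed6... value shown above.

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Read the cluster CA certificate (path is an assumption; adjust as needed).
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA's public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }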
	I1014 13:55:00.115292   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:55:00.115302   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:55:00.117091   25306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 13:55:00.118483   25306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 13:55:00.124368   25306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 13:55:00.124388   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 13:55:00.145958   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 13:55:00.528887   25306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:55:00.528967   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:00.528987   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021 minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=true
	I1014 13:55:00.543744   25306 ops.go:34] apiserver oom_adj: -16
	I1014 13:55:00.662237   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.162275   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.662698   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.163027   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.662525   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.162972   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.662524   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.751160   25306 kubeadm.go:1113] duration metric: took 3.222260966s to wait for elevateKubeSystemPrivileges
	I1014 13:55:03.751200   25306 kubeadm.go:394] duration metric: took 14.945849765s to StartCluster
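The repeated "kubectl get sa default" calls above are a roughly half-second poll that waits for the default ServiceAccount to exist before minikube grants kube-system elevated RBAC. A minimal sketch of that kind of wait loop using os/exec; the kubectl path and kubeconfig are taken from the log, while the two-minute budget and loop shape are assumptions, not minikube's actual code.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // Poll until the "default" ServiceAccount exists, mirroring the ~500ms loop in the log.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.1/kubectl",
                "get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
            if out, err := cmd.CombinedOutput(); err == nil {
                fmt.Printf("default ServiceAccount is present:\n%s", out)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }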
	I1014 13:55:03.751222   25306 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.751304   25306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.752000   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.752256   25306 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:03.752277   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:55:03.752262   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:55:03.752277   25306 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 13:55:03.752370   25306 addons.go:69] Setting storage-provisioner=true in profile "ha-450021"
	I1014 13:55:03.752388   25306 addons.go:234] Setting addon storage-provisioner=true in "ha-450021"
	I1014 13:55:03.752407   25306 addons.go:69] Setting default-storageclass=true in profile "ha-450021"
	I1014 13:55:03.752422   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.752435   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:03.752440   25306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-450021"
	I1014 13:55:03.752851   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752853   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752892   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.752907   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.768120   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40745
	I1014 13:55:03.768294   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I1014 13:55:03.768559   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.768773   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.769132   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769156   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769285   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769308   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769488   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769594   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769745   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.770040   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.770082   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.771657   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.771868   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 13:55:03.772274   25306 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 13:55:03.772426   25306 addons.go:234] Setting addon default-storageclass=true in "ha-450021"
	I1014 13:55:03.772458   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.772689   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.772720   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.785301   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I1014 13:55:03.785754   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.786274   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.786301   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.786653   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.786685   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I1014 13:55:03.786852   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.787134   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.787596   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.787621   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.787924   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.788463   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.788507   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.788527   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.790666   25306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:55:03.791877   25306 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:03.791892   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:55:03.791905   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.794484   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794853   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.794881   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794998   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.795150   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.795298   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.795425   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.804082   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1014 13:55:03.804475   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.804871   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.804893   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.805154   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.805296   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.806617   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.806811   25306 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:03.806824   25306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:55:03.806838   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.809334   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809735   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.809764   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.810083   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.810214   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.810346   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.916382   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:55:03.970762   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:04.045876   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:04.562851   25306 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1014 13:55:04.828250   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828267   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828285   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828272   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828566   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828578   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828586   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828592   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828628   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828642   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828650   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828657   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828760   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.828781   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828790   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830286   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.830303   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830318   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.830357   25306 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 13:55:04.830377   25306 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 13:55:04.830467   25306 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 13:55:04.830477   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.830487   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.830500   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.851944   25306 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1014 13:55:04.852525   25306 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 13:55:04.852541   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.852549   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.852558   25306 round_trippers.go:473]     Content-Type: application/json
	I1014 13:55:04.852569   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.860873   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:55:04.863865   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.863890   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.864194   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.864235   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.864246   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.865910   25306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 13:55:04.867207   25306 addons.go:510] duration metric: took 1.114927542s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 13:55:04.867245   25306 start.go:246] waiting for cluster config update ...
	I1014 13:55:04.867260   25306 start.go:255] writing updated cluster config ...
	I1014 13:55:04.868981   25306 out.go:201] 
	I1014 13:55:04.870358   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:04.870432   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.871998   25306 out.go:177] * Starting "ha-450021-m02" control-plane node in "ha-450021" cluster
	I1014 13:55:04.873148   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:55:04.873168   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:55:04.873259   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:55:04.873270   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:55:04.873348   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.873725   25306 start.go:360] acquireMachinesLock for ha-450021-m02: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:55:04.873773   25306 start.go:364] duration metric: took 27.606µs to acquireMachinesLock for "ha-450021-m02"
	I1014 13:55:04.873797   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:04.873856   25306 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1014 13:55:04.875450   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:55:04.875534   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:04.875571   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:04.891858   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1014 13:55:04.892468   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:04.893080   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:04.893101   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:04.893416   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:04.893639   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:04.893812   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:04.894009   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:55:04.894037   25306 client.go:168] LocalClient.Create starting
	I1014 13:55:04.894069   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:55:04.894114   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894134   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894211   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:55:04.894240   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894258   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894285   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:55:04.894306   25306 main.go:141] libmachine: (ha-450021-m02) Calling .PreCreateCheck
	I1014 13:55:04.894485   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:04.894889   25306 main.go:141] libmachine: Creating machine...
	I1014 13:55:04.894903   25306 main.go:141] libmachine: (ha-450021-m02) Calling .Create
	I1014 13:55:04.895072   25306 main.go:141] libmachine: (ha-450021-m02) Creating KVM machine...
	I1014 13:55:04.896272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing default KVM network
	I1014 13:55:04.896429   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing private KVM network mk-ha-450021
	I1014 13:55:04.896566   25306 main.go:141] libmachine: (ha-450021-m02) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:04.896592   25306 main.go:141] libmachine: (ha-450021-m02) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:55:04.896679   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:04.896574   25672 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:04.896767   25306 main.go:141] libmachine: (ha-450021-m02) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:55:05.156236   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.156095   25672 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa...
	I1014 13:55:05.229289   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229176   25672 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk...
	I1014 13:55:05.229317   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing magic tar header
	I1014 13:55:05.229327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing SSH key tar header
	I1014 13:55:05.229334   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229291   25672 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:05.229448   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02
	I1014 13:55:05.229476   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:55:05.229494   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 (perms=drwx------)
	I1014 13:55:05.229512   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:05.229525   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:55:05.229536   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:55:05.229551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:55:05.229562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:55:05.229576   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home
	I1014 13:55:05.229584   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Skipping /home - not owner
	I1014 13:55:05.229634   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:55:05.229673   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:55:05.229699   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:55:05.229714   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:55:05.229724   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:05.230559   25306 main.go:141] libmachine: (ha-450021-m02) define libvirt domain using xml: 
	I1014 13:55:05.230582   25306 main.go:141] libmachine: (ha-450021-m02) <domain type='kvm'>
	I1014 13:55:05.230608   25306 main.go:141] libmachine: (ha-450021-m02)   <name>ha-450021-m02</name>
	I1014 13:55:05.230626   25306 main.go:141] libmachine: (ha-450021-m02)   <memory unit='MiB'>2200</memory>
	I1014 13:55:05.230636   25306 main.go:141] libmachine: (ha-450021-m02)   <vcpu>2</vcpu>
	I1014 13:55:05.230650   25306 main.go:141] libmachine: (ha-450021-m02)   <features>
	I1014 13:55:05.230660   25306 main.go:141] libmachine: (ha-450021-m02)     <acpi/>
	I1014 13:55:05.230666   25306 main.go:141] libmachine: (ha-450021-m02)     <apic/>
	I1014 13:55:05.230676   25306 main.go:141] libmachine: (ha-450021-m02)     <pae/>
	I1014 13:55:05.230682   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.230689   25306 main.go:141] libmachine: (ha-450021-m02)   </features>
	I1014 13:55:05.230699   25306 main.go:141] libmachine: (ha-450021-m02)   <cpu mode='host-passthrough'>
	I1014 13:55:05.230706   25306 main.go:141] libmachine: (ha-450021-m02)   
	I1014 13:55:05.230711   25306 main.go:141] libmachine: (ha-450021-m02)   </cpu>
	I1014 13:55:05.230718   25306 main.go:141] libmachine: (ha-450021-m02)   <os>
	I1014 13:55:05.230728   25306 main.go:141] libmachine: (ha-450021-m02)     <type>hvm</type>
	I1014 13:55:05.230739   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='cdrom'/>
	I1014 13:55:05.230748   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='hd'/>
	I1014 13:55:05.230763   25306 main.go:141] libmachine: (ha-450021-m02)     <bootmenu enable='no'/>
	I1014 13:55:05.230773   25306 main.go:141] libmachine: (ha-450021-m02)   </os>
	I1014 13:55:05.230780   25306 main.go:141] libmachine: (ha-450021-m02)   <devices>
	I1014 13:55:05.230790   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='cdrom'>
	I1014 13:55:05.230819   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/boot2docker.iso'/>
	I1014 13:55:05.230839   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hdc' bus='scsi'/>
	I1014 13:55:05.230847   25306 main.go:141] libmachine: (ha-450021-m02)       <readonly/>
	I1014 13:55:05.230854   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230864   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='disk'>
	I1014 13:55:05.230881   25306 main.go:141] libmachine: (ha-450021-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:55:05.230897   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk'/>
	I1014 13:55:05.230912   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hda' bus='virtio'/>
	I1014 13:55:05.230923   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230933   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230942   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='mk-ha-450021'/>
	I1014 13:55:05.230949   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230956   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.230966   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230975   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='default'/>
	I1014 13:55:05.230987   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230998   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.231008   25306 main.go:141] libmachine: (ha-450021-m02)     <serial type='pty'>
	I1014 13:55:05.231016   25306 main.go:141] libmachine: (ha-450021-m02)       <target port='0'/>
	I1014 13:55:05.231026   25306 main.go:141] libmachine: (ha-450021-m02)     </serial>
	I1014 13:55:05.231034   25306 main.go:141] libmachine: (ha-450021-m02)     <console type='pty'>
	I1014 13:55:05.231042   25306 main.go:141] libmachine: (ha-450021-m02)       <target type='serial' port='0'/>
	I1014 13:55:05.231047   25306 main.go:141] libmachine: (ha-450021-m02)     </console>
	I1014 13:55:05.231060   25306 main.go:141] libmachine: (ha-450021-m02)     <rng model='virtio'>
	I1014 13:55:05.231073   25306 main.go:141] libmachine: (ha-450021-m02)       <backend model='random'>/dev/random</backend>
	I1014 13:55:05.231079   25306 main.go:141] libmachine: (ha-450021-m02)     </rng>
	I1014 13:55:05.231090   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231096   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231107   25306 main.go:141] libmachine: (ha-450021-m02)   </devices>
	I1014 13:55:05.231116   25306 main.go:141] libmachine: (ha-450021-m02) </domain>
	I1014 13:55:05.231125   25306 main.go:141] libmachine: (ha-450021-m02) 
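The block above is the libvirt domain definition the kvm2 driver logs line by line before creating the VM. Below is a minimal, hypothetical sketch of rendering a similar (heavily trimmed) definition with Go's text/template; the struct fields, template text, and disk path are illustrative and are not the driver's actual template.

    package main

    import (
        "os"
        "text/template"
    )

    // DomainSpec holds the few fields this trimmed example substitutes into the XML.
    type DomainSpec struct {
        Name     string
        MemoryMB int
        VCPUs    int
        DiskPath string
        Network  string
    }

    const domainXML = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMB}}</memory>
      <vcpu>{{.VCPUs}}</vcpu>
      <os><type>hvm</type><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>`

    func main() {
        tmpl := template.Must(template.New("domain").Parse(domainXML))
        spec := DomainSpec{Name: "ha-450021-m02", MemoryMB: 2200, VCPUs: 2,
            DiskPath: "/path/to/ha-450021-m02.rawdisk", Network: "mk-ha-450021"}
        // Render the definition to stdout; a real driver would hand the XML to libvirt to define the domain.
        if err := tmpl.Execute(os.Stdout, spec); err != nil {
            panic(err)
        }
    }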
	I1014 13:55:05.238505   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:39:fb:46 in network default
	I1014 13:55:05.239084   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring networks are active...
	I1014 13:55:05.239109   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:05.239788   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network default is active
	I1014 13:55:05.240113   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network mk-ha-450021 is active
	I1014 13:55:05.240488   25306 main.go:141] libmachine: (ha-450021-m02) Getting domain xml...
	I1014 13:55:05.241224   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:06.508569   25306 main.go:141] libmachine: (ha-450021-m02) Waiting to get IP...
	I1014 13:55:06.509274   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.509728   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.509800   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.509721   25672 retry.go:31] will retry after 253.994001ms: waiting for machine to come up
	I1014 13:55:06.765296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.765720   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.765754   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.765695   25672 retry.go:31] will retry after 330.390593ms: waiting for machine to come up
	I1014 13:55:07.097342   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.097779   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.097809   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.097725   25672 retry.go:31] will retry after 315.743674ms: waiting for machine to come up
	I1014 13:55:07.414954   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.415551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.415596   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.415518   25672 retry.go:31] will retry after 505.396104ms: waiting for machine to come up
	I1014 13:55:07.922086   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.922530   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.922555   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.922518   25672 retry.go:31] will retry after 762.026701ms: waiting for machine to come up
	I1014 13:55:08.686471   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:08.686874   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:08.686903   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:08.686842   25672 retry.go:31] will retry after 891.989591ms: waiting for machine to come up
	I1014 13:55:09.580677   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:09.581174   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:09.581195   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:09.581150   25672 retry.go:31] will retry after 716.006459ms: waiting for machine to come up
	I1014 13:55:10.299036   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:10.299435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:10.299462   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:10.299390   25672 retry.go:31] will retry after 999.038321ms: waiting for machine to come up
	I1014 13:55:11.299678   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:11.300155   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:11.300182   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:11.300092   25672 retry.go:31] will retry after 1.384319167s: waiting for machine to come up
	I1014 13:55:12.686664   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:12.687084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:12.687130   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:12.687031   25672 retry.go:31] will retry after 1.750600606s: waiting for machine to come up
	I1014 13:55:14.439721   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:14.440157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:14.440185   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:14.440132   25672 retry.go:31] will retry after 2.719291498s: waiting for machine to come up
	I1014 13:55:17.160916   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:17.161338   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:17.161359   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:17.161288   25672 retry.go:31] will retry after 2.934487947s: waiting for machine to come up
	I1014 13:55:20.097623   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:20.098033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:20.098054   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:20.097994   25672 retry.go:31] will retry after 3.495468914s: waiting for machine to come up
	I1014 13:55:23.597556   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:23.598084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:23.598105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:23.598043   25672 retry.go:31] will retry after 4.955902252s: waiting for machine to come up
	I1014 13:55:28.555767   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.556335   25306 main.go:141] libmachine: (ha-450021-m02) Found IP for machine: 192.168.39.89
	I1014 13:55:28.556360   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.556369   25306 main.go:141] libmachine: (ha-450021-m02) Reserving static IP address...
	I1014 13:55:28.556652   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "ha-450021-m02", mac: "52:54:00:51:58:78", ip: "192.168.39.89"} in network mk-ha-450021
	I1014 13:55:28.627598   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:28.627633   25306 main.go:141] libmachine: (ha-450021-m02) Reserved static IP address: 192.168.39.89
	I1014 13:55:28.627646   25306 main.go:141] libmachine: (ha-450021-m02) Waiting for SSH to be available...
	I1014 13:55:28.629843   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.630161   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021
	I1014 13:55:28.630190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:51:58:78
	I1014 13:55:28.630310   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:28.630337   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:28.630368   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:28.630381   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:28.630396   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:28.634134   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:55:28.634150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:55:28.634157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | command : exit 0
	I1014 13:55:28.634162   25306 main.go:141] libmachine: (ha-450021-m02) DBG | err     : exit status 255
	I1014 13:55:28.634170   25306 main.go:141] libmachine: (ha-450021-m02) DBG | output  : 
	I1014 13:55:31.634385   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:31.636814   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.637150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637249   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:31.637272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:31.637290   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:31.637302   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:31.637327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:31.762693   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: <nil>: 
	I1014 13:55:31.762993   25306 main.go:141] libmachine: (ha-450021-m02) KVM machine creation complete!
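The "will retry after ..." lines during the IP and SSH waits above come from a polling loop whose delay grows between attempts. A minimal sketch of such a backoff-and-retry helper follows; the function shape, attempt budget, and jittered delays are illustrative assumptions rather than minikube's actual retry.go.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn until it succeeds, the attempt budget runs out, or the deadline passes.
    // The delay grows each round with a little jitter, mirroring the varying waits in the log.
    func retry(attempts int, base time.Duration, deadline time.Time, fn func() error) error {
        delay := base
        for i := 1; i <= attempts; i++ {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("deadline exceeded after attempt %d: %w", i, err)
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay *= 2
        }
        return errors.New("out of attempts")
    }

    func main() {
        start := time.Now()
        err := retry(10, 250*time.Millisecond, time.Now().Add(2*time.Minute), func() error {
            // Placeholder check; a real caller would ask libvirt/DHCP for the domain's IP.
            if time.Since(start) > 2*time.Second {
                return nil
            }
            return errors.New("unable to find current IP address of domain")
        })
        fmt.Println("result:", err)
    }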
	I1014 13:55:31.763308   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:31.763786   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.763969   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.764130   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:55:31.764154   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetState
	I1014 13:55:31.765484   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:55:31.765498   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:55:31.765506   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:55:31.765513   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.767968   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768352   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.768386   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768540   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.768701   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.768883   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.769050   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.769231   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.769460   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.769474   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:55:31.877746   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:55:31.877770   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:55:31.877779   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.880489   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.880858   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.880884   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.881034   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.881200   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881337   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881482   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.881602   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.881767   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.881780   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:55:31.995447   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:55:31.995515   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:55:31.995529   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:55:31.995541   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995787   25306 buildroot.go:166] provisioning hostname "ha-450021-m02"
	I1014 13:55:31.995817   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995999   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.998434   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998820   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.998841   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998986   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.999184   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999375   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999496   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.999675   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.999836   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.999847   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m02 && echo "ha-450021-m02" | sudo tee /etc/hostname
	I1014 13:55:32.125055   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m02
	
	I1014 13:55:32.125093   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.128764   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129158   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.129191   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129369   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.129548   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129704   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129831   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.129997   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.130195   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.130212   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:55:32.251676   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
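The hostname provisioning above reduces to two idempotent shell steps: persist the hostname and make sure /etc/hosts resolves it. A minimal standalone sketch of the same logic (the node name is taken from the log; everything else is illustrative):

    # Sketch: set the guest hostname and keep /etc/hosts consistent with it.
    NODE="ha-450021-m02"
    sudo hostname "${NODE}" && echo "${NODE}" | sudo tee /etc/hostname
    if ! grep -q "[[:space:]]${NODE}\$" /etc/hosts; then
      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NODE}/" /etc/hosts
      else
        echo "127.0.1.1 ${NODE}" | sudo tee -a /etc/hosts
      fi
    fi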
	I1014 13:55:32.251705   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:55:32.251731   25306 buildroot.go:174] setting up certificates
	I1014 13:55:32.251744   25306 provision.go:84] configureAuth start
	I1014 13:55:32.251763   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:32.252028   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.254513   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.254862   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.254887   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.255045   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.257083   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257408   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.257435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257565   25306 provision.go:143] copyHostCerts
	I1014 13:55:32.257592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257618   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:55:32.257629   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257712   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:55:32.257797   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257821   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:55:32.257831   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257870   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:55:32.257928   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257951   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:55:32.257959   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257986   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:55:32.258053   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m02 san=[127.0.0.1 192.168.39.89 ha-450021-m02 localhost minikube]
	I1014 13:55:32.418210   25306 provision.go:177] copyRemoteCerts
	I1014 13:55:32.418267   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:55:32.418287   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.421033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421356   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.421387   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421587   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.421794   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.421949   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.422067   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.508850   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:55:32.508917   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:55:32.534047   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:55:32.534120   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:55:32.558263   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:55:32.558335   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:55:32.582102   25306 provision.go:87] duration metric: took 330.342541ms to configureAuth
	I1014 13:55:32.582134   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:55:32.582301   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:32.582371   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.584832   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585166   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.585192   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585349   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.585528   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585644   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585802   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.585929   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.586092   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.586111   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:55:32.822330   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
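The container-runtime options step writes a single sysconfig drop-in and restarts CRI-O; assuming the same service CIDR as in the log, the equivalent manual step is roughly:

    # Sketch: mark the service CIDR as an insecure registry for CRI-O, then restart it.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio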
	I1014 13:55:32.822358   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:55:32.822366   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetURL
	I1014 13:55:32.823614   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using libvirt version 6000000
	I1014 13:55:32.826190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.826567   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826737   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:55:32.826754   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:55:32.826772   25306 client.go:171] duration metric: took 27.932717671s to LocalClient.Create
	I1014 13:55:32.826803   25306 start.go:167] duration metric: took 27.93279451s to libmachine.API.Create "ha-450021"
	I1014 13:55:32.826815   25306 start.go:293] postStartSetup for "ha-450021-m02" (driver="kvm2")
	I1014 13:55:32.826825   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:55:32.826846   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:32.827073   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:55:32.827097   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.829440   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829745   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.829785   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829885   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.830054   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.830208   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.830348   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.918434   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:55:32.922919   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:55:32.922947   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:55:32.923010   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:55:32.923092   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:55:32.923101   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:55:32.923187   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:55:32.933129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:32.957819   25306 start.go:296] duration metric: took 130.989484ms for postStartSetup
	I1014 13:55:32.957871   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:32.958438   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.961024   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961393   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.961425   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961630   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:32.961835   25306 start.go:128] duration metric: took 28.087968814s to createHost
	I1014 13:55:32.961858   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.964121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964493   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.964528   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964702   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.964854   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.964966   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.965109   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.965227   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.965432   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.965446   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:55:33.079362   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914133.060490571
	
	I1014 13:55:33.079386   25306 fix.go:216] guest clock: 1728914133.060490571
	I1014 13:55:33.079405   25306 fix.go:229] Guest: 2024-10-14 13:55:33.060490571 +0000 UTC Remote: 2024-10-14 13:55:32.961847349 +0000 UTC m=+73.185560400 (delta=98.643222ms)
	I1014 13:55:33.079425   25306 fix.go:200] guest clock delta is within tolerance: 98.643222ms
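The clock check above compares the guest's wall clock (date +%s.%N over SSH) against the host's and proceeds only when the delta is within tolerance. A rough sketch of that comparison, with the guest address from the log and no tolerance enforcement:

    # Sketch: measure host/guest clock skew over SSH (illustrative only).
    guest=$(ssh docker@192.168.39.89 'date +%s.%N')
    host=$(date +%s.%N)
    echo "$host $guest" | awk '{printf "delta: %.3f s\n", $1 - $2}'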
	I1014 13:55:33.079431   25306 start.go:83] releasing machines lock for "ha-450021-m02", held for 28.205646747s
	I1014 13:55:33.079452   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.079689   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:33.082245   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.082619   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.082645   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.085035   25306 out.go:177] * Found network options:
	I1014 13:55:33.086426   25306 out.go:177]   - NO_PROXY=192.168.39.176
	W1014 13:55:33.087574   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.087613   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088138   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088304   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088401   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:55:33.088445   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	W1014 13:55:33.088467   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.088536   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:55:33.088557   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:33.091084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091497   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091525   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091675   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091813   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091867   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.091959   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.092027   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092088   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092156   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.092203   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.324240   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:55:33.330527   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:55:33.330586   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:55:33.345640   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:55:33.345657   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:55:33.345701   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:55:33.361741   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:55:33.375019   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:55:33.375071   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:55:33.388301   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:55:33.401227   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:55:33.511329   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:55:33.658848   25306 docker.go:233] disabling docker service ...
	I1014 13:55:33.658913   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:55:33.673279   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:55:33.685917   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:55:33.818316   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:55:33.936222   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
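Before CRI-O is configured, the competing runtimes are stopped and masked so they cannot claim the CRI socket. A condensed sketch of that sequence (unit names as in the log):

    # Sketch: stop and mask other runtimes so CRI-O owns the CRI socket.
    sudo systemctl stop -f containerd || true
    for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
      sudo systemctl stop -f "$unit" || true
    done
    sudo systemctl disable cri-docker.socket docker.socket || true
    sudo systemctl mask cri-docker.service docker.service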
	I1014 13:55:33.950467   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:55:33.970208   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:55:33.970265   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.984110   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:55:33.984169   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.995549   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.006565   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.018479   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:55:34.030013   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.041645   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.059707   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
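The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on the pause image, cgroup driver and unprivileged-port sysctl. One way to confirm the result on the guest (expected values shown as comments, assuming the edits succeeded):

    # Sketch: confirm the keys the provisioner just edited in the CRI-O drop-in.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # Expected, roughly:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",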
	I1014 13:55:34.070442   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:55:34.080309   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:55:34.080366   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:55:34.093735   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
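The fallback above is the usual pattern for the kernel networking prerequisites: if the bridge sysctl is absent, load br_netfilter, then enable IPv4 forwarding. A minimal sketch:

    # Sketch: kernel prerequisites for pod networking on the guest.
    if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
      sudo modprobe br_netfilter
    fi
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward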
	I1014 13:55:34.103445   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:34.215901   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:55:34.308754   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:55:34.308820   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:55:34.313625   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:55:34.313676   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:55:34.317635   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:55:34.356534   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:55:34.356604   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.384187   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.414404   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:55:34.415699   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:55:34.416965   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:34.419296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419601   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:34.419628   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419811   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:55:34.423754   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:55:34.435980   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:55:34.436151   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:34.436381   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.436419   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.450826   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I1014 13:55:34.451213   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.451655   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.451677   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.451944   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.452123   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:34.453521   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:34.453781   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.453811   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.467708   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I1014 13:55:34.468144   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.468583   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.468597   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.468863   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.469023   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:34.469168   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.89
	I1014 13:55:34.469180   25306 certs.go:194] generating shared ca certs ...
	I1014 13:55:34.469197   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.469314   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:55:34.469365   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:55:34.469378   25306 certs.go:256] generating profile certs ...
	I1014 13:55:34.469463   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:55:34.469494   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796
	I1014 13:55:34.469515   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.254]
	I1014 13:55:34.810302   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 ...
	I1014 13:55:34.810336   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796: {Name:mk62309e383c07d7599f8a1200bdc69462a2d14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810530   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 ...
	I1014 13:55:34.810549   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796: {Name:mkf013e40a46367f5d473382a243ff918ed6f0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810679   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:55:34.810843   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:55:34.811031   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:55:34.811055   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:55:34.811078   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:55:34.811100   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:55:34.811122   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:55:34.811141   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:55:34.811162   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:55:34.811184   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:55:34.811205   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
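The apiserver certificate generated above has to carry every address a client may dial, including the second control-plane IP and the kube-vip VIP 192.168.39.254. A quick way to double-check the SAN list on the signed cert (path as printed in the log):

    # Sketch: inspect the Subject Alternative Names on the generated apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt \
      | grep -A1 'Subject Alternative Name'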
	I1014 13:55:34.811281   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:55:34.811405   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:55:34.811439   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:55:34.811482   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:55:34.811508   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:55:34.811530   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:55:34.811573   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:34.811602   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:34.811623   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:55:34.811635   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:55:34.811667   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:34.814657   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815058   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:34.815083   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815262   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:34.815417   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:34.815552   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:34.815647   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:34.891004   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:55:34.895702   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:55:34.906613   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:55:34.910438   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:55:34.923172   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:55:34.928434   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:55:34.941440   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:55:34.946469   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:55:34.957168   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:55:34.961259   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:55:34.972556   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:55:34.980332   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:55:34.991839   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:55:35.019053   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:55:35.043395   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:55:35.066158   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:55:35.088175   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 13:55:35.110925   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 13:55:35.134916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:55:35.158129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:55:35.180405   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:55:35.202548   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:55:35.225992   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:55:35.249981   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:55:35.266180   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:55:35.282687   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:55:35.299271   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:55:35.316623   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:55:35.332853   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:55:35.348570   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:55:35.364739   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:55:35.370372   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:55:35.380736   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385152   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385211   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.390839   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:55:35.401523   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:55:35.412185   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416457   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416547   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.421940   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:55:35.432212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:55:35.442100   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446159   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446196   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.451427   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
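Each PEM above is installed into the system trust store by linking it under its OpenSSL subject hash. A hedged sketch of the same idea for a single certificate (file name taken from the log):

    # Sketch: trust a PEM by linking it under its subject hash in /etc/ssl/certs.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "${CERT}")
    sudo ln -fs "${CERT}" "/etc/ssl/certs/${HASH}.0"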
	I1014 13:55:35.461211   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:55:35.465126   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:55:35.465175   25306 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.1 crio true true} ...
	I1014 13:55:35.465273   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:55:35.465315   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:55:35.465353   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:55:35.480860   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:55:35.480912   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
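The manifest above is what later lands in /etc/kubernetes/manifests/kube-vip.yaml as a static pod. Assuming the generated YAML is saved locally as kube-vip.yaml, a client-side dry run is a cheap sanity check before it is copied onto the node:

    # Sketch: validate the generated static-pod manifest without touching the cluster.
    kubectl apply --dry-run=client -f kube-vip.yaml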
	I1014 13:55:35.480953   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.489708   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:55:35.489755   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.498478   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:55:35.498498   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498541   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498556   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1014 13:55:35.498585   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1014 13:55:35.502947   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:55:35.502966   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:55:36.107052   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.107146   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.112161   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:55:36.112193   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:55:36.135646   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:55:36.156399   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.156509   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.173587   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:55:36.173634   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
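Each binary above is fetched with a checksum=file: sidecar, so the download is verified against the published .sha256 before being copied into /var/lib/minikube/binaries. A hedged sketch of the same verification done by hand for kubelet, using the URLs from the log:

    # Sketch: download kubelet v1.31.1 and verify it against the published checksum.
    VER=v1.31.1
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check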
	I1014 13:55:36.629216   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:55:36.638544   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:55:36.654373   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:55:36.670100   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:55:36.685420   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:55:36.689062   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:55:36.700413   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:36.822396   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:36.840300   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:36.840777   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:36.840820   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:36.856367   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I1014 13:55:36.856879   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:36.857323   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:36.857351   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:36.857672   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:36.857841   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:36.857975   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:55:36.858071   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:55:36.858091   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:36.860736   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861146   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:36.861185   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861337   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:36.861529   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:36.861694   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:36.861807   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:37.015771   25306 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:37.015819   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1014 13:55:58.710606   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (21.694741621s)
	I1014 13:55:58.710647   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:55:59.236903   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m02 minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:55:59.350641   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:55:59.452342   25306 start.go:319] duration metric: took 22.5943626s to joinCluster
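The ~22.6s joinCluster step above is the standard two-step control-plane join: kubeadm token create --print-join-command is run on the existing control plane to mint a token and CA-cert hash, and the resulting command is replayed on the new machine with --control-plane and its own --apiserver-advertise-address so it comes up as an additional API server/etcd member; the follow-up kubectl calls then label the node and remove its NoSchedule taint. A rough Go sketch of driving the same two kubeadm commands with os/exec (in the real flow both run remotely via ssh_runner):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1 (on an existing control-plane node): mint a join command with token and CA hash.
        out, err := exec.Command("sudo", "kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
        if err != nil {
            panic(err)
        }
        joinCmd := strings.TrimSpace(string(out))

        // Step 2 (on the joining node): replay it with the control-plane flags seen in the log.
        full := joinCmd + " --control-plane" +
            " --apiserver-advertise-address=192.168.39.89" +
            " --apiserver-bind-port=8443" +
            " --ignore-preflight-errors=all"
        fmt.Println("join command:", full)
        // exec.Command("sudo", "bash", "-c", full) would then be run on the new node, typically over SSH.
    }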
	I1014 13:55:59.452418   25306 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:59.452735   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:59.453925   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:55:59.454985   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:59.700035   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:59.782880   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:59.783215   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:55:59.783307   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
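kapi.go builds a client from the jenkins kubeconfig, which points at the VIP (192.168.39.254:8443), and then overrides the host with the primary node's direct address, since while the second member is still joining only that API server is guaranteed to answer. A minimal client-go sketch of the same override (function name is illustrative):

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // clientForHost loads the kubeconfig and points the resulting client at one
    // concrete API server instead of the load-balanced VIP the kubeconfig references.
    func clientForHost(kubeconfig, directHost string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        cfg.Host = directHost
        return kubernetes.NewForConfig(cfg)
    }

    func main() {
        cs, err := clientForHost("/home/jenkins/minikube-integration/19790-7836/kubeconfig", "https://192.168.39.176:8443")
        if err != nil {
            panic(err)
        }
        _ = cs // used for the node/pod polling that follows
    }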
	I1014 13:55:59.783576   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m02" to be "Ready" ...
	I1014 13:55:59.783682   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:55:59.783696   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:59.783707   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:59.783718   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:59.796335   25306 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 13:56:00.284246   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.284269   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.284281   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.284288   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.300499   25306 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1014 13:56:00.784180   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.784204   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.784212   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.784217   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.811652   25306 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1014 13:56:01.284849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.284881   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.284893   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.284898   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.288918   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:01.783917   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.783937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.783945   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.783949   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.787799   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:01.788614   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:02.284602   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.284624   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.284632   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.284642   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.290773   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:02.783789   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.783815   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.783826   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.783831   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.788075   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.284032   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.284057   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.284068   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.284074   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.287614   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:03.783925   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.783945   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.783953   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.783956   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.788205   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.788893   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:04.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.283987   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.283995   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.283999   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.287325   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:04.784192   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.784212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.784219   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.784225   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.787474   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:05.284787   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.284804   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.284813   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.284815   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.293558   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:05.784473   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.784495   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.784505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.784509   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.787964   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:06.283912   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.283936   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.283946   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.283954   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.286733   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:06.287200   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:06.784670   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.784694   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.784706   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.784711   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.788422   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:07.283873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.283901   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.283913   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.283918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.286693   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:07.784588   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.784609   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.784617   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.784621   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.787856   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:08.284107   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.284126   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.284134   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.284138   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.287096   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:08.287719   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:08.784096   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.784116   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.784124   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.784127   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.787645   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.284728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.284752   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.284759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.284764   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.288184   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.784057   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.784097   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.784108   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.784122   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.793007   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:10.284378   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.284400   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.284408   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.284413   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.287852   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:10.288463   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:10.783831   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.783850   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.783858   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.783862   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.787590   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:11.284759   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.284781   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.284790   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.284794   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.287610   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:11.784640   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.784659   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.784667   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.784672   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.787776   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:12.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.283997   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.284009   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.284014   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.289974   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:56:12.290779   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:12.784021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.784047   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.784061   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.784069   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.787917   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.283870   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.283893   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.283901   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.287328   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.784620   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.784644   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.784653   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.784657   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.787810   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.283867   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.283892   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.283900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.287541   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.784419   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.784440   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.784447   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.784450   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.787853   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.788359   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:15.284687   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.284709   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.284720   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.284726   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.287861   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.288461   25306 node_ready.go:49] node "ha-450021-m02" has status "Ready":"True"
	I1014 13:56:15.288480   25306 node_ready.go:38] duration metric: took 15.504881835s for node "ha-450021-m02" to be "Ready" ...
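The GET loop above simply re-reads the node object every 500ms until its Ready condition reports True (about 15.5s here, while kubelet and the CNI finish coming up). An equivalent wait written against client-go, as a sketch assuming a clientset such as the one returned by the earlier clientForHost sketch (package and function names are illustrative):

    package haprobe

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls the node every 500ms until Ready=True or the timeout expires.
    func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }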
	I1014 13:56:15.288487   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:56:15.288543   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:15.288553   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.288559   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.288563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.292417   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.298105   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.298175   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:56:15.298182   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.298189   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.298194   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.300838   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.301679   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.301692   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.301699   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.301703   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.304037   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.304599   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.304614   25306 pod_ready.go:82] duration metric: took 6.489417ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304622   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304661   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:56:15.304669   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.304683   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.304694   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.306880   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.307573   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.307590   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.307600   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.307610   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.309331   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.309944   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.309963   25306 pod_ready.go:82] duration metric: took 5.334499ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.309975   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.310021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:56:15.310032   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.310044   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.310060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312281   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.312954   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.312972   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.312984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.314997   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.315561   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.315581   25306 pod_ready.go:82] duration metric: took 5.597491ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315592   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315648   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:56:15.315660   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.315671   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.315680   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.317496   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.318188   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.318205   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.318217   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.318224   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.320143   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.320663   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.320681   25306 pod_ready.go:82] duration metric: took 5.077444ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.320700   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.485053   25306 request.go:632] Waited for 164.298634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485113   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485118   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.485126   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.485130   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.488373   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.685383   25306 request.go:632] Waited for 196.403765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685451   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685458   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.685469   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.685478   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.688990   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.689603   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.689627   25306 pod_ready.go:82] duration metric: took 368.913108ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
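The request.go:632 "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's default client-side rate limiter (QPS 5, burst 10 when the rest.Config leaves them at zero): with two GETs per pod the waiter quickly exhausts the burst and the limiter spaces further calls roughly 200ms apart, which matches the ~195ms waits above. If that latency mattered, the limits could be raised on the config, a hedged sketch:

    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19790-7836/kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // zero means client-go's default of 5 requests/s
        cfg.Burst = 100 // zero means client-go's default burst of 10
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = cs // subsequent polls no longer hit the client-side limiter
    }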
	I1014 13:56:15.689641   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.885558   25306 request.go:632] Waited for 195.846701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885605   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885611   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.885618   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.885623   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.889124   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.084785   25306 request.go:632] Waited for 194.38123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084845   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.084853   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.084857   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.088301   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.088998   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.089015   25306 pod_ready.go:82] duration metric: took 399.36552ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.089025   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.285209   25306 request.go:632] Waited for 196.12444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285293   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285302   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.285313   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.285319   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.289023   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.485127   25306 request.go:632] Waited for 195.353812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485198   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.485224   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.485231   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.488483   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.489170   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.489190   25306 pod_ready.go:82] duration metric: took 400.158231ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.489202   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.685336   25306 request.go:632] Waited for 196.062822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685418   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685429   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.685440   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.685449   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.688757   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.884883   25306 request.go:632] Waited for 195.393841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884933   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.884945   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.884950   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.888074   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.888564   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.888582   25306 pod_ready.go:82] duration metric: took 399.371713ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.888594   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.084731   25306 request.go:632] Waited for 196.036159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084792   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084799   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.084811   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.084818   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.088594   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.284774   25306 request.go:632] Waited for 195.293808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284866   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284878   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.284889   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.284900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.288050   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.288623   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.288647   25306 pod_ready.go:82] duration metric: took 400.044261ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.288659   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.485648   25306 request.go:632] Waited for 196.912408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485723   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485734   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.485744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.485752   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.488420   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:17.685402   25306 request.go:632] Waited for 196.37897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685455   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685460   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.685467   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.685471   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.689419   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.690366   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.690386   25306 pod_ready.go:82] duration metric: took 401.717488ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.690395   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.885498   25306 request.go:632] Waited for 195.043697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885569   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.885576   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.885581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.888648   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.085570   25306 request.go:632] Waited for 196.366356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085639   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085649   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.085660   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.085668   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.088834   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.089495   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.089519   25306 pod_ready.go:82] duration metric: took 399.116695ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.089532   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.285606   25306 request.go:632] Waited for 196.011378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285677   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285685   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.285693   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.285699   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.288947   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.484902   25306 request.go:632] Waited for 195.327209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484963   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484970   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.484981   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.484989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.488080   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.488592   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.488612   25306 pod_ready.go:82] duration metric: took 399.071687ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.488628   25306 pod_ready.go:39] duration metric: took 3.200130009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
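pod_ready walks the label selectors listed above, fetches each matching kube-system pod and then the node it runs on, and requires the PodReady condition to be True. A compact client-go sketch of the per-selector check, again assuming a clientset cs from the earlier sketches (helper and package names are illustrative):

    package haprobe

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // Selectors copied from the log line above.
    var criticalSelectors = []string{
        "k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
        "component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
    }

    // criticalPodsReady reports whether every kube-system pod matching the selectors is Ready.
    func criticalPodsReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
        for _, sel := range criticalSelectors {
            pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
            if err != nil {
                return false, err
            }
            for _, p := range pods.Items {
                ready := false
                for _, c := range p.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        ready = true
                        break
                    }
                }
                if !ready {
                    return false, nil
                }
            }
        }
        return true, nil
    }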
	I1014 13:56:18.488645   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:56:18.488706   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:56:18.504222   25306 api_server.go:72] duration metric: took 19.051768004s to wait for apiserver process to appear ...
	I1014 13:56:18.504252   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:56:18.504274   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:56:18.508419   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1014 13:56:18.508480   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:56:18.508494   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.508504   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.508511   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.509353   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:56:18.509470   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:56:18.509489   25306 api_server.go:131] duration metric: took 5.230064ms to wait for apiserver health ...
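api_server.go probes /healthz and then /version directly over HTTPS, authenticating with the profile's client certificate against the cluster CA, and reads the control-plane version (v1.31.1 here) from the response. A small standalone sketch of the same two probes with net/http (host and credential paths taken from the log):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func main() {
        cert, err := tls.LoadX509KeyPair(
            "/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt",
            "/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key")
        if err != nil {
            panic(err)
        }
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
            Certificates: []tls.Certificate{cert},
            RootCAs:      pool,
        }}}
        for _, path := range []string{"/healthz", "/version"} {
            resp, err := client.Get("https://192.168.39.176:8443" + path)
            if err != nil {
                panic(err)
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            fmt.Printf("%s -> %d %s\n", path, resp.StatusCode, body)
        }
    }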
	I1014 13:56:18.509499   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:56:18.684863   25306 request.go:632] Waited for 175.279951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684960   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684974   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.684985   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.684994   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.691157   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:18.697135   25306 system_pods.go:59] 17 kube-system pods found
	I1014 13:56:18.697234   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:18.697252   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:18.697264   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:18.697271   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:18.697279   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:18.697284   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:18.697290   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:18.697299   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:18.697305   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:18.697314   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:18.697319   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:18.697328   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:18.697334   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:18.697340   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:18.697345   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:18.697350   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:18.697356   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:18.697364   25306 system_pods.go:74] duration metric: took 187.854432ms to wait for pod list to return data ...
	I1014 13:56:18.697375   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:56:18.884741   25306 request.go:632] Waited for 187.279644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884797   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884802   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.884809   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.884813   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.888582   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.888812   25306 default_sa.go:45] found service account: "default"
	I1014 13:56:18.888830   25306 default_sa.go:55] duration metric: took 191.448571ms for default service account to be created ...
	I1014 13:56:18.888841   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:56:19.085294   25306 request.go:632] Waited for 196.363765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085358   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085366   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.085377   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.085383   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.092864   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:56:19.097323   25306 system_pods.go:86] 17 kube-system pods found
	I1014 13:56:19.097351   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:19.097357   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:19.097362   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:19.097366   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:19.097370   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:19.097374   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:19.097377   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:19.097382   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:19.097387   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:19.097390   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:19.097394   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:19.097398   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:19.097402   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:19.097411   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:19.097417   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:19.097420   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:19.097423   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:19.097429   25306 system_pods.go:126] duration metric: took 208.581366ms to wait for k8s-apps to be running ...
	I1014 13:56:19.097436   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:56:19.097477   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:19.112071   25306 system_svc.go:56] duration metric: took 14.628482ms WaitForService to wait for kubelet
	I1014 13:56:19.112097   25306 kubeadm.go:582] duration metric: took 19.659648051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:56:19.112113   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:56:19.285537   25306 request.go:632] Waited for 173.355083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285629   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285637   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.285649   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.285654   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.289726   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:19.290673   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290698   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290712   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290717   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290723   25306 node_conditions.go:105] duration metric: took 178.605419ms to run NodePressure ...
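The NodePressure step lists every node, records its capacity (2 CPUs and 17734596Ki of ephemeral storage per VM here) and verifies that no pressure condition is set. An equivalent read with client-go, again as a sketch assuming a clientset cs from the earlier examples:

    package haprobe

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure prints each node's capacity and fails if a pressure condition is True.
    func checkNodePressure(ctx context.Context, cs kubernetes.Interface) error {
        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
                n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
            for _, c := range n.Status.Conditions {
                switch c.Type {
                case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                    if c.Status == corev1.ConditionTrue {
                        return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                    }
                }
            }
        }
        return nil
    }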
	I1014 13:56:19.290740   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:56:19.290784   25306 start.go:255] writing updated cluster config ...
	I1014 13:56:19.292978   25306 out.go:201] 
	I1014 13:56:19.294410   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:19.294496   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.296041   25306 out.go:177] * Starting "ha-450021-m03" control-plane node in "ha-450021" cluster
	I1014 13:56:19.297096   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:56:19.297116   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:56:19.297204   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:56:19.297214   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:56:19.297292   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.297485   25306 start.go:360] acquireMachinesLock for ha-450021-m03: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:56:19.297521   25306 start.go:364] duration metric: took 20.106µs to acquireMachinesLock for "ha-450021-m03"
	I1014 13:56:19.297537   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:19.297616   25306 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1014 13:56:19.299122   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:56:19.299222   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:19.299255   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:19.313918   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I1014 13:56:19.314305   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:19.314837   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:19.314851   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:19.315181   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:19.315347   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:19.315509   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:19.315639   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:56:19.315670   25306 client.go:168] LocalClient.Create starting
	I1014 13:56:19.315704   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:56:19.315748   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315768   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315834   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:56:19.315859   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315870   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315884   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:56:19.315892   25306 main.go:141] libmachine: (ha-450021-m03) Calling .PreCreateCheck
	I1014 13:56:19.316068   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:19.316425   25306 main.go:141] libmachine: Creating machine...
	I1014 13:56:19.316438   25306 main.go:141] libmachine: (ha-450021-m03) Calling .Create
	I1014 13:56:19.316586   25306 main.go:141] libmachine: (ha-450021-m03) Creating KVM machine...
	I1014 13:56:19.317686   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing default KVM network
	I1014 13:56:19.317799   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing private KVM network mk-ha-450021
	I1014 13:56:19.317961   25306 main.go:141] libmachine: (ha-450021-m03) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.317988   25306 main.go:141] libmachine: (ha-450021-m03) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:56:19.318035   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.317950   26053 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.318138   25306 main.go:141] libmachine: (ha-450021-m03) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:56:19.552577   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.552461   26053 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa...
	I1014 13:56:19.731749   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731620   26053 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk...
	I1014 13:56:19.731783   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing magic tar header
	I1014 13:56:19.731797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing SSH key tar header
	I1014 13:56:19.731814   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731727   26053 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.731831   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03
	I1014 13:56:19.731859   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 (perms=drwx------)
	I1014 13:56:19.731873   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:56:19.731885   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:56:19.731899   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:56:19.731913   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.731942   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:56:19.731955   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:56:19.731964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:56:19.731973   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home
	I1014 13:56:19.731985   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:56:19.732001   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:56:19.732012   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:56:19.732026   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:19.732040   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Skipping /home - not owner
	I1014 13:56:19.732949   25306 main.go:141] libmachine: (ha-450021-m03) define libvirt domain using xml: 
	I1014 13:56:19.732973   25306 main.go:141] libmachine: (ha-450021-m03) <domain type='kvm'>
	I1014 13:56:19.732984   25306 main.go:141] libmachine: (ha-450021-m03)   <name>ha-450021-m03</name>
	I1014 13:56:19.732992   25306 main.go:141] libmachine: (ha-450021-m03)   <memory unit='MiB'>2200</memory>
	I1014 13:56:19.733004   25306 main.go:141] libmachine: (ha-450021-m03)   <vcpu>2</vcpu>
	I1014 13:56:19.733014   25306 main.go:141] libmachine: (ha-450021-m03)   <features>
	I1014 13:56:19.733021   25306 main.go:141] libmachine: (ha-450021-m03)     <acpi/>
	I1014 13:56:19.733031   25306 main.go:141] libmachine: (ha-450021-m03)     <apic/>
	I1014 13:56:19.733038   25306 main.go:141] libmachine: (ha-450021-m03)     <pae/>
	I1014 13:56:19.733044   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733056   25306 main.go:141] libmachine: (ha-450021-m03)   </features>
	I1014 13:56:19.733071   25306 main.go:141] libmachine: (ha-450021-m03)   <cpu mode='host-passthrough'>
	I1014 13:56:19.733081   25306 main.go:141] libmachine: (ha-450021-m03)   
	I1014 13:56:19.733089   25306 main.go:141] libmachine: (ha-450021-m03)   </cpu>
	I1014 13:56:19.733099   25306 main.go:141] libmachine: (ha-450021-m03)   <os>
	I1014 13:56:19.733106   25306 main.go:141] libmachine: (ha-450021-m03)     <type>hvm</type>
	I1014 13:56:19.733117   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='cdrom'/>
	I1014 13:56:19.733126   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='hd'/>
	I1014 13:56:19.733136   25306 main.go:141] libmachine: (ha-450021-m03)     <bootmenu enable='no'/>
	I1014 13:56:19.733151   25306 main.go:141] libmachine: (ha-450021-m03)   </os>
	I1014 13:56:19.733160   25306 main.go:141] libmachine: (ha-450021-m03)   <devices>
	I1014 13:56:19.733169   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='cdrom'>
	I1014 13:56:19.733183   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/boot2docker.iso'/>
	I1014 13:56:19.733196   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hdc' bus='scsi'/>
	I1014 13:56:19.733209   25306 main.go:141] libmachine: (ha-450021-m03)       <readonly/>
	I1014 13:56:19.733218   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733227   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='disk'>
	I1014 13:56:19.733239   25306 main.go:141] libmachine: (ha-450021-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:56:19.733252   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk'/>
	I1014 13:56:19.733266   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hda' bus='virtio'/>
	I1014 13:56:19.733278   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733286   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733298   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='mk-ha-450021'/>
	I1014 13:56:19.733306   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733315   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733325   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733356   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='default'/>
	I1014 13:56:19.733373   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733379   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733383   25306 main.go:141] libmachine: (ha-450021-m03)     <serial type='pty'>
	I1014 13:56:19.733387   25306 main.go:141] libmachine: (ha-450021-m03)       <target port='0'/>
	I1014 13:56:19.733394   25306 main.go:141] libmachine: (ha-450021-m03)     </serial>
	I1014 13:56:19.733399   25306 main.go:141] libmachine: (ha-450021-m03)     <console type='pty'>
	I1014 13:56:19.733403   25306 main.go:141] libmachine: (ha-450021-m03)       <target type='serial' port='0'/>
	I1014 13:56:19.733410   25306 main.go:141] libmachine: (ha-450021-m03)     </console>
	I1014 13:56:19.733415   25306 main.go:141] libmachine: (ha-450021-m03)     <rng model='virtio'>
	I1014 13:56:19.733430   25306 main.go:141] libmachine: (ha-450021-m03)       <backend model='random'>/dev/random</backend>
	I1014 13:56:19.733436   25306 main.go:141] libmachine: (ha-450021-m03)     </rng>
	I1014 13:56:19.733441   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733445   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733449   25306 main.go:141] libmachine: (ha-450021-m03)   </devices>
	I1014 13:56:19.733455   25306 main.go:141] libmachine: (ha-450021-m03) </domain>
	I1014 13:56:19.733462   25306 main.go:141] libmachine: (ha-450021-m03) 
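
The XML logged above is the complete libvirt domain definition the kvm2 driver generates for the new node (ISO cdrom, raw disk, two virtio NICs, serial console, virtio RNG). For reference only, defining and starting such a domain programmatically looks roughly like the Go sketch below; this is not minikube's actual code, it assumes the XML has been saved to a hypothetical ha-450021-m03.xml and uses the libvirt.org/go/libvirt bindings.

    package main

    import (
        "log"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Hypothetical file holding the domain XML shown in the log above.
        xml, err := os.ReadFile("ha-450021-m03.xml")
        if err != nil {
            log.Fatal(err)
        }
        // Same connection URI as KVMQemuURI in the provisioning config.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Persistently define the guest ("define libvirt domain using xml").
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        // Boot it ("Creating domain..." in the log).
        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain defined and started")
    }
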
	I1014 13:56:19.740127   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:3e:d5:3c in network default
	I1014 13:56:19.740688   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring networks are active...
	I1014 13:56:19.740710   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:19.741382   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network default is active
	I1014 13:56:19.741753   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network mk-ha-450021 is active
	I1014 13:56:19.742099   25306 main.go:141] libmachine: (ha-450021-m03) Getting domain xml...
	I1014 13:56:19.742834   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:21.010084   25306 main.go:141] libmachine: (ha-450021-m03) Waiting to get IP...
	I1014 13:56:21.010944   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.011316   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.011377   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.011315   26053 retry.go:31] will retry after 306.133794ms: waiting for machine to come up
	I1014 13:56:21.318826   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.319333   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.319361   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.319280   26053 retry.go:31] will retry after 366.66223ms: waiting for machine to come up
	I1014 13:56:21.687816   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.688312   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.688353   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.688274   26053 retry.go:31] will retry after 390.93754ms: waiting for machine to come up
	I1014 13:56:22.080797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.081263   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.081290   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.081223   26053 retry.go:31] will retry after 398.805239ms: waiting for machine to come up
	I1014 13:56:22.481851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.482319   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.482343   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.482287   26053 retry.go:31] will retry after 640.042779ms: waiting for machine to come up
	I1014 13:56:23.123714   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:23.124086   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:23.124144   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:23.124073   26053 retry.go:31] will retry after 920.9874ms: waiting for machine to come up
	I1014 13:56:24.047070   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.047392   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.047414   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.047351   26053 retry.go:31] will retry after 897.422021ms: waiting for machine to come up
	I1014 13:56:24.946948   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.947347   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.947372   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.947310   26053 retry.go:31] will retry after 1.40276044s: waiting for machine to come up
	I1014 13:56:26.351855   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:26.352313   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:26.352340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:26.352279   26053 retry.go:31] will retry after 1.726907493s: waiting for machine to come up
	I1014 13:56:28.080396   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:28.080846   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:28.080875   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:28.080790   26053 retry.go:31] will retry after 1.482180268s: waiting for machine to come up
	I1014 13:56:29.564825   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:29.565318   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:29.565340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:29.565288   26053 retry.go:31] will retry after 2.541525756s: waiting for machine to come up
	I1014 13:56:32.109990   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:32.110440   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:32.110469   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:32.110395   26053 retry.go:31] will retry after 2.914830343s: waiting for machine to come up
	I1014 13:56:35.026789   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:35.027206   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:35.027240   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:35.027152   26053 retry.go:31] will retry after 3.572900713s: waiting for machine to come up
	I1014 13:56:38.603496   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:38.603914   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:38.603943   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:38.603867   26053 retry.go:31] will retry after 3.566960315s: waiting for machine to come up
	I1014 13:56:42.173796   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174271   25306 main.go:141] libmachine: (ha-450021-m03) Found IP for machine: 192.168.39.55
	I1014 13:56:42.174288   25306 main.go:141] libmachine: (ha-450021-m03) Reserving static IP address...
	I1014 13:56:42.174301   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has current primary IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174679   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "ha-450021-m03", mac: "52:54:00:af:04:2c", ip: "192.168.39.55"} in network mk-ha-450021
	I1014 13:56:42.249586   25306 main.go:141] libmachine: (ha-450021-m03) Reserved static IP address: 192.168.39.55
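
The repeated "will retry after ...: waiting for machine to come up" lines above show the driver polling the DHCP leases of network mk-ha-450021 until the VM's MAC obtains an address. A generic sketch of that wait loop is below; the delay growth policy is an assumption (minikube's retry helper also applies jitter), and the lookup function stands in for the actual lease query.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it yields an address or the timeout expires,
    // growing the delay between attempts roughly like the retries logged above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 300 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay = delay * 3 / 2 // grow the delay between attempts
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("unable to find current IP address")
            }
            return "192.168.39.55", nil // the address the VM eventually obtained
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
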
	I1014 13:56:42.249623   25306 main.go:141] libmachine: (ha-450021-m03) Waiting for SSH to be available...
	I1014 13:56:42.249632   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:42.252725   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.253185   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021
	I1014 13:56:42.253208   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:af:04:2c
	I1014 13:56:42.253434   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:42.253458   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:42.253486   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:42.253504   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:42.253518   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:42.256978   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:56:42.256996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:56:42.257003   25306 main.go:141] libmachine: (ha-450021-m03) DBG | command : exit 0
	I1014 13:56:42.257008   25306 main.go:141] libmachine: (ha-450021-m03) DBG | err     : exit status 255
	I1014 13:56:42.257014   25306 main.go:141] libmachine: (ha-450021-m03) DBG | output  : 
	I1014 13:56:45.257522   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:45.260212   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260696   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.260726   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260786   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:45.260815   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:45.260836   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:45.260845   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:45.260853   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:45.382585   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: <nil>: 
	I1014 13:56:45.382879   25306 main.go:141] libmachine: (ha-450021-m03) KVM machine creation complete!
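
WaitForSSH above repeatedly runs "exit 0" through the external ssh client until the guest's sshd answers: the first attempt fails with exit status 255, the second returns cleanly and creation is declared complete. A minimal sketch of that probe follows; the option list is abbreviated from the one in the log, and the key path and address are simply copied from it.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Probe "exit 0" over SSH; before sshd is ready this fails
        // (exit status 255), afterwards it succeeds.
        cmd := exec.Command("ssh",
            "-i", "/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "docker@192.168.39.55",
            "exit 0")
        if err := cmd.Run(); err != nil {
            fmt.Println("SSH not ready yet:", err)
            return
        }
        fmt.Println("SSH is available")
    }
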
	I1014 13:56:45.383199   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:45.383711   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.383880   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.384004   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:56:45.384014   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetState
	I1014 13:56:45.385264   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:56:45.385276   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:56:45.385281   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:56:45.385287   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.387787   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388084   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.388108   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388291   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.388456   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388593   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.388830   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.389029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.389040   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:56:45.485735   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:56:45.485758   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:56:45.485768   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.488882   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489166   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.489189   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489303   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.489486   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489751   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.489875   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.490046   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.490060   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:56:45.587324   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:56:45.587394   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:56:45.587407   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:56:45.587422   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587668   25306 buildroot.go:166] provisioning hostname "ha-450021-m03"
	I1014 13:56:45.587694   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587891   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.589987   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590329   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.590355   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590484   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.590650   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590770   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590887   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.591045   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.591197   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.591208   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m03 && echo "ha-450021-m03" | sudo tee /etc/hostname
	I1014 13:56:45.708548   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m03
	
	I1014 13:56:45.708578   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.711602   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.711972   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.711996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.712173   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.712328   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712490   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.712744   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.712915   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.712938   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:56:45.819779   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:56:45.819813   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:56:45.819833   25306 buildroot.go:174] setting up certificates
	I1014 13:56:45.819844   25306 provision.go:84] configureAuth start
	I1014 13:56:45.819857   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.820154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:45.823118   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823460   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.823487   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823678   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.825593   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.825969   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.826000   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.826082   25306 provision.go:143] copyHostCerts
	I1014 13:56:45.826120   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826162   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:56:45.826174   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826256   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:56:45.826387   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826414   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:56:45.826422   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826470   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:56:45.826533   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826559   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:56:45.826567   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826616   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:56:45.826689   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m03 san=[127.0.0.1 192.168.39.55 ha-450021-m03 localhost minikube]
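
The "generating server cert" step issues a TLS server certificate signed by the profile's CA, with the SANs listed above (127.0.0.1, 192.168.39.55, ha-450021-m03, localhost, minikube). Below is a self-contained crypto/x509 sketch of that kind of issuance; it creates a throwaway CA purely for illustration, whereas minikube reuses ca.pem/ca-key.pem from the .minikube/certs directory.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the sketch (minikube signs with its existing CA key).
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            log.Fatal(err)
        }

        // Server certificate with the SANs from the log.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        srvTpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-450021-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-450021-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.55")},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
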
	I1014 13:56:45.954899   25306 provision.go:177] copyRemoteCerts
	I1014 13:56:45.954971   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:56:45.955000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.957506   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957791   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.957818   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957960   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.958125   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.958305   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.958436   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.036842   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:56:46.036916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:56:46.062450   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:56:46.062515   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:56:46.086853   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:56:46.086926   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:56:46.115352   25306 provision.go:87] duration metric: took 295.495227ms to configureAuth
	I1014 13:56:46.115379   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:56:46.115621   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:46.115716   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.118262   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118631   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.118656   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118842   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.119017   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119286   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.119431   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.119582   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.119596   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:56:46.343295   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:56:46.343323   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:56:46.343334   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetURL
	I1014 13:56:46.344763   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using libvirt version 6000000
	I1014 13:56:46.346964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347332   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.347354   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347553   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:56:46.347568   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:56:46.347575   25306 client.go:171] duration metric: took 27.031894224s to LocalClient.Create
	I1014 13:56:46.347595   25306 start.go:167] duration metric: took 27.031958272s to libmachine.API.Create "ha-450021"
	I1014 13:56:46.347605   25306 start.go:293] postStartSetup for "ha-450021-m03" (driver="kvm2")
	I1014 13:56:46.347614   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:56:46.347629   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.347825   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:56:46.347855   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.350344   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350734   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.350754   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350907   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.351098   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.351237   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.351388   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.433896   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:56:46.438009   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:56:46.438030   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:56:46.438090   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:56:46.438161   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:56:46.438171   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:56:46.438246   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:56:46.448052   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:46.472253   25306 start.go:296] duration metric: took 124.635752ms for postStartSetup
	I1014 13:56:46.472307   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:46.472896   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.475688   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476037   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.476063   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476352   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:46.476544   25306 start.go:128] duration metric: took 27.178917299s to createHost
	I1014 13:56:46.476567   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.478884   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479221   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.479251   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479355   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.479528   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479638   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479747   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.479874   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.480025   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.480035   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:56:46.583399   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914206.561472302
	
	I1014 13:56:46.583425   25306 fix.go:216] guest clock: 1728914206.561472302
	I1014 13:56:46.583435   25306 fix.go:229] Guest: 2024-10-14 13:56:46.561472302 +0000 UTC Remote: 2024-10-14 13:56:46.476556325 +0000 UTC m=+146.700269378 (delta=84.915977ms)
	I1014 13:56:46.583455   25306 fix.go:200] guest clock delta is within tolerance: 84.915977ms
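
The fix.go lines above compare the guest clock (read over SSH with "date +%s.%N") against the host clock and accept the machine when the delta is small enough; here the delta is about 85ms. A rough sketch of that check is below, with an assumed 2-second tolerance and a simplified ssh invocation.

    package main

    import (
        "fmt"
        "math"
        "os/exec"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Read the guest's clock as fractional seconds since the epoch.
        out, err := exec.Command("ssh", "docker@192.168.39.55", "date", "+%s.%N").Output()
        if err != nil {
            fmt.Println("ssh failed:", err)
            return
        }
        guest, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
        if err != nil {
            fmt.Println("unexpected clock output:", err)
            return
        }
        host := float64(time.Now().UnixNano()) / 1e9
        delta := time.Duration(math.Abs(host-guest) * float64(time.Second))
        if delta < 2*time.Second { // assumed tolerance
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance; the clock should be synced\n", delta)
        }
    }
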
	I1014 13:56:46.583460   25306 start.go:83] releasing machines lock for "ha-450021-m03", held for 27.285931106s
	I1014 13:56:46.583477   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.583714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.586281   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.586554   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.586578   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.589268   25306 out.go:177] * Found network options:
	I1014 13:56:46.590896   25306 out.go:177]   - NO_PROXY=192.168.39.176,192.168.39.89
	W1014 13:56:46.592325   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.592354   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.592374   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.592957   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593143   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593217   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:56:46.593262   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	W1014 13:56:46.593451   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.593472   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.593517   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:56:46.593532   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.596078   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596267   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596474   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596494   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596667   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.596762   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596784   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596836   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.596933   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.597000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597050   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.597134   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.597191   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597299   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.829516   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:56:46.836362   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:56:46.836435   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:56:46.855005   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:56:46.855034   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:56:46.855101   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:56:46.873805   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:56:46.888317   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:56:46.888368   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:56:46.902770   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:56:46.916283   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:56:47.031570   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:56:47.186900   25306 docker.go:233] disabling docker service ...
	I1014 13:56:47.186971   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:56:47.202040   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:56:47.215421   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:56:47.352807   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:56:47.479560   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:56:47.493558   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:56:47.511643   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:56:47.511704   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.521941   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:56:47.522055   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.534488   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.545529   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.555346   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:56:47.565221   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.574851   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.591247   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.601017   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:56:47.610150   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:56:47.610208   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:56:47.623643   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:56:47.632860   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:47.769053   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
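The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, switches the cgroup manager to cgroupfs, and re-adds conmon_cgroup = "pod" before crio is restarted. A rough Go equivalent of those rewrites applied to an in-memory copy of the file (the starting contents below are illustrative, not taken from the VM):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	// Pin the pause image, mirroring the first sed above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Switch the cgroup manager to cgroupfs.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then pin it to "pod"
	// right after cgroup_manager, as the sed pipeline does.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}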
	I1014 13:56:47.859548   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:56:47.859617   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:56:47.864769   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:56:47.864838   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:56:47.868622   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:56:47.912151   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:56:47.912224   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.943678   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.974464   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:56:47.975982   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:56:47.977421   25306 out.go:177]   - env NO_PROXY=192.168.39.176,192.168.39.89
	I1014 13:56:47.978761   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:47.981382   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.981851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:47.981880   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.982078   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:56:47.986330   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:56:47.999765   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:56:47.999983   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:48.000276   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.000314   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.015013   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I1014 13:56:48.015440   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.015880   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.015898   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.016248   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.016426   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:56:48.017904   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:48.018185   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.018221   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.032080   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I1014 13:56:48.032532   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.033010   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.033034   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.033376   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.033566   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:48.033738   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.55
	I1014 13:56:48.033750   25306 certs.go:194] generating shared ca certs ...
	I1014 13:56:48.033771   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.033910   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:56:48.033951   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:56:48.033962   25306 certs.go:256] generating profile certs ...
	I1014 13:56:48.034054   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:56:48.034099   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2
	I1014 13:56:48.034119   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.55 192.168.39.254]
	I1014 13:56:48.250009   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 ...
	I1014 13:56:48.250065   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2: {Name:mk915feb36aa4db7e40387e7070135b42d923437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250246   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 ...
	I1014 13:56:48.250260   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2: {Name:mk5df80a68a940fb5e6799020daa8453d1ca5d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250346   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:56:48.250480   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
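The apiserver certificate above is generated with IP SANs covering the service IP, localhost, all three control-plane nodes, and the kube-vip VIP. A minimal crypto/x509 sketch of issuing such a cert; the throwaway self-signed CA here is a stand-in for the cluster CA that minikube reuses from .minikube/ca.key, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the cluster CA key/cert pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert whose IP SANs match the list in the log above.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.176"), net.ParseIP("192.168.39.89"),
			net.ParseIP("192.168.39.55"), net.ParseIP("192.168.39.254"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}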
	I1014 13:56:48.250647   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:56:48.250665   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:56:48.250682   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:56:48.250698   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:56:48.250714   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:56:48.250729   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:56:48.250744   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:56:48.250759   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:56:48.282713   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:56:48.282807   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:56:48.282843   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:56:48.282853   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:56:48.282876   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:56:48.282899   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:56:48.282919   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:56:48.282958   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:48.282987   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.283001   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.283013   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.283046   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:48.285808   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286249   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:48.286279   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286442   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:48.286648   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:48.286791   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:48.286909   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:48.366887   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:56:48.372822   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:56:48.386233   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:56:48.391254   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:56:48.402846   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:56:48.407460   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:56:48.418138   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:56:48.423366   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:56:48.435286   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:56:48.442980   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:56:48.457010   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:56:48.462031   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:56:48.475327   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:56:48.499553   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:56:48.526670   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:56:48.552105   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:56:48.577419   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1014 13:56:48.600650   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:56:48.623847   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:56:48.649170   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:56:48.674110   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:56:48.700598   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:56:48.725176   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:56:48.750067   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:56:48.767549   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:56:48.786866   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:56:48.804737   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:56:48.822022   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:56:48.840501   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:56:48.858556   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:56:48.875294   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:56:48.880974   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:56:48.892080   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896904   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896954   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.902856   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:56:48.914212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:56:48.926784   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931725   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931780   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.937633   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:56:48.949727   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:56:48.960604   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965337   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965398   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.970965   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
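The openssl x509 -hash / ln -fs pairs above wire each CA certificate into /etc/ssl/certs under its OpenSSL subject-hash name. A small Go sketch of one such link, shelling out to the openssl binary for the hash (the path reuses the minikubeCA.pem example from the log; requires openssl on PATH and write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and points
// /etc/ssl/certs/<hash>.0 at it, which is what the ln -fs commands accomplish.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, mirroring ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
}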
	I1014 13:56:48.983521   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:56:48.987988   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:56:48.988067   25306 kubeadm.go:934] updating node {m03 192.168.39.55 8443 v1.31.1 crio true true} ...
	I1014 13:56:48.988197   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:56:48.988224   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:56:48.988260   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:56:49.006786   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:56:49.006878   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 13:56:49.006948   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.017177   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:56:49.017231   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1014 13:56:49.027571   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027572   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:56:49.027592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027633   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1014 13:56:49.027650   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027677   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:49.041850   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:56:49.041880   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:56:49.059453   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:56:49.059469   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.059486   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:56:49.059574   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.108836   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:56:49.108879   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
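The stat/scp pairs above implement a "copy only if missing" check for the cached kubeadm, kubectl, and kubelet binaries. A local-filesystem sketch of that decision in Go; it uses a size comparison as one plausible criterion and treats both paths as local files, whereas minikube performs the check and copy over SSH via ssh_runner:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies the cached binary to target unless a file of the same
// size is already there.
func ensureBinary(cached, target string) error {
	src, err := os.Stat(cached)
	if err != nil {
		return err
	}
	if dst, err := os.Stat(target); err == nil && dst.Size() == src.Size() {
		return nil // already in place, skip the copy
	}
	if err := os.MkdirAll(filepath.Dir(target), 0o755); err != nil {
		return err
	}
	in, err := os.Open(cached)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(target)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	err := ensureBinary(
		"/home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl",
		"/var/lib/minikube/binaries/v1.31.1/kubectl",
	)
	fmt.Println("transfer result:", err)
}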
	I1014 13:56:49.922146   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:56:49.934057   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:56:49.951495   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:56:49.969831   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:56:49.987375   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:56:49.991392   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:56:50.004437   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:50.138457   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:56:50.156141   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:50.156664   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:50.156719   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:50.172505   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1014 13:56:50.172984   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:50.173395   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:50.173421   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:50.173801   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:50.173992   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:50.174119   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:56:50.174253   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:56:50.174270   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:50.177090   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177620   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:50.177652   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177788   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:50.177965   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:50.178111   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:50.178264   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:50.344835   25306 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:50.344884   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443"
	I1014 13:57:13.924825   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443": (23.579918283s)
	I1014 13:57:13.924874   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:57:14.548857   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m03 minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:57:14.695478   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:57:14.877781   25306 start.go:319] duration metric: took 24.703657095s to joinCluster
	I1014 13:57:14.877880   25306 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:57:14.878165   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:57:14.879747   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:57:14.881030   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:57:15.185770   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:57:15.218461   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:57:15.218911   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:57:15.218986   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
	I1014 13:57:15.219237   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m03" to be "Ready" ...
	I1014 13:57:15.219350   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.219360   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.219373   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.219378   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.231145   25306 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 13:57:15.719481   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.719504   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.719515   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.719523   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.723133   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.219449   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.219474   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.219486   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.219493   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.222753   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.719775   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.719794   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.719801   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.719805   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.723148   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.220337   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.220382   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.223796   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.224523   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:17.719785   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.719812   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.719823   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.719828   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.724599   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:18.219479   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.219497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.219505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.219510   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.222903   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:18.719939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.719958   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.719964   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.722786   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:19.220210   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.220235   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.220246   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.220251   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.223890   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:19.719936   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.719957   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.719965   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.725873   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:19.726613   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:20.219399   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.219418   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.219426   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.219429   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.222447   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:20.720283   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.720304   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.720311   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.720316   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.723293   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:21.219622   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.219643   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.219651   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.219655   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.223137   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:21.719413   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.719434   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.719441   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.719445   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.727130   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:21.728875   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:22.219563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.219584   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.219593   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.219597   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.222980   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:22.719873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.719897   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.719906   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.719910   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.723538   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.219424   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.219447   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.219456   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.219459   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.223288   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.719840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.719863   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.719870   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.719874   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.725306   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:24.220401   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.220427   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.220439   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.220448   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.224025   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:24.224423   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:24.720285   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.720311   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.720323   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.720331   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.724123   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.219820   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.219841   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.219849   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.219852   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.223237   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.720061   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.720081   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.720090   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.720095   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.727909   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:26.220029   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.220052   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.220060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.220065   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.223671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:26.719549   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.719569   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.719577   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.719581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.724063   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:26.724628   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:27.220196   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.220218   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.220230   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.227906   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:27.719535   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.719576   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.719587   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.719592   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.727292   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:28.219952   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.219973   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.219983   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.219988   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.223688   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:28.719432   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.719455   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.719463   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.719468   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.722896   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.219877   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.219901   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.219911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.219915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.223129   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.223965   25306 node_ready.go:49] node "ha-450021-m03" has status "Ready":"True"
	I1014 13:57:29.223987   25306 node_ready.go:38] duration metric: took 14.004731761s for node "ha-450021-m03" to be "Ready" ...
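The repeated GETs to /api/v1/nodes/ha-450021-m03 above are a readiness poll: the node object is fetched roughly every 500ms and its Ready condition checked, for up to 6 minutes. A client-go sketch of that loop, using the kubeconfig path and node name from the log; the 500ms interval is inferred from the request timestamps, not taken from minikube's source:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19790-7836/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-450021-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to be Ready")
}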
	I1014 13:57:29.223998   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:29.224060   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:29.224068   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.224075   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.224081   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.230054   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:29.238333   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.238422   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:57:29.238435   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.238446   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.238455   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.242284   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.243174   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.243194   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.243204   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.243210   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.245933   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.246411   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.246431   25306 pod_ready.go:82] duration metric: took 8.073653ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246440   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246494   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:57:29.246505   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.246515   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.246521   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.248883   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.249550   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.249563   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.249569   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.249573   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.251738   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.252240   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.252260   25306 pod_ready.go:82] duration metric: took 5.813932ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252268   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252312   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:57:29.252319   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.252326   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.252330   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.254629   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.255222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.255236   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.255243   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.255248   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.257432   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.257842   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.257858   25306 pod_ready.go:82] duration metric: took 5.5841ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257865   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257906   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:57:29.257913   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.257920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.257926   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.260016   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.260730   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:29.260748   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.260759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.260766   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.262822   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.263416   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.263434   25306 pod_ready.go:82] duration metric: took 5.562613ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.263445   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.420814   25306 request.go:632] Waited for 157.302029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420888   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420896   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.420904   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.420911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.423933   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.620244   25306 request.go:632] Waited for 195.721406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620303   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620309   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.620331   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.620359   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.623721   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.624232   25306 pod_ready.go:93] pod "etcd-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.624248   25306 pod_ready.go:82] duration metric: took 360.793531ms for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.624265   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.820803   25306 request.go:632] Waited for 196.4673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820871   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820878   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.820888   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.820899   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.825055   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:30.020658   25306 request.go:632] Waited for 194.868544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020733   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.020740   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.020744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.024136   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.024766   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.024782   25306 pod_ready.go:82] duration metric: took 400.510119ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.024791   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.220429   25306 request.go:632] Waited for 195.542568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220491   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.220508   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.220517   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.224059   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.420172   25306 request.go:632] Waited for 195.340177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420225   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420231   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.420238   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.420243   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.423967   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.424613   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.424631   25306 pod_ready.go:82] duration metric: took 399.833776ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.424640   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.620846   25306 request.go:632] Waited for 196.141352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620922   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620928   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.620935   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.620942   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.624496   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.820849   25306 request.go:632] Waited for 195.396807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820975   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.820988   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.820995   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.824502   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.825021   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.825046   25306 pod_ready.go:82] duration metric: took 400.398723ms for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.825059   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.020285   25306 request.go:632] Waited for 195.157008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020365   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.020385   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.020393   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.024268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.220585   25306 request.go:632] Waited for 195.341359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220643   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220650   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.220659   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.220664   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.224268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.224942   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.224972   25306 pod_ready.go:82] duration metric: took 399.90441ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.224991   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.419861   25306 request.go:632] Waited for 194.791136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419920   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419926   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.419934   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.419939   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.423671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.620170   25306 request.go:632] Waited for 195.363598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620257   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620267   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.620279   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.620289   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.623838   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.624806   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.624830   25306 pod_ready.go:82] duration metric: took 399.825307ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.624845   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.819925   25306 request.go:632] Waited for 194.986166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819986   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819995   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.820007   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.820020   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.823660   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.020870   25306 request.go:632] Waited for 196.217554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020953   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020964   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.020976   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.020984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.024484   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.025120   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.025154   25306 pod_ready.go:82] duration metric: took 400.297134ms for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.025174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.220154   25306 request.go:632] Waited for 194.89867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220229   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.220246   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.223571   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.420701   25306 request.go:632] Waited for 196.352524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420758   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420763   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.420770   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.420774   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.424213   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.424900   25306 pod_ready.go:93] pod "kube-proxy-9tbfp" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.424923   25306 pod_ready.go:82] duration metric: took 399.74019ms for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.424936   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.619849   25306 request.go:632] Waited for 194.848954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619902   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.619915   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.619918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.623593   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.820780   25306 request.go:632] Waited for 196.366155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820854   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.820863   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.820870   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.824510   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.825180   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.825196   25306 pod_ready.go:82] duration metric: took 400.2529ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.825205   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.020309   25306 request.go:632] Waited for 195.030338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020398   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020409   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.020421   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.020429   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.023944   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.220873   25306 request.go:632] Waited for 196.168894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220972   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220984   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.221002   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.221010   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.224398   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.225139   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.225161   25306 pod_ready.go:82] duration metric: took 399.9482ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.225174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.420278   25306 request.go:632] Waited for 195.028059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420352   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420358   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.420365   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.420370   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.423970   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.619940   25306 request.go:632] Waited for 195.292135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620017   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620024   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.620031   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.620038   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.623628   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.624429   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.624446   25306 pod_ready.go:82] duration metric: took 399.265054ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.624456   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.820766   25306 request.go:632] Waited for 196.250065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820834   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820840   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.820847   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.820861   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.824813   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.020844   25306 request.go:632] Waited for 195.391993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020901   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.020915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.020920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.025139   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.026105   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.026127   25306 pod_ready.go:82] duration metric: took 401.663759ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.026140   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.220315   25306 request.go:632] Waited for 194.095801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220368   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.220381   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.224012   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.420204   25306 request.go:632] Waited for 195.373756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420275   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420280   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.420288   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.420292   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.424022   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.424779   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.424801   25306 pod_ready.go:82] duration metric: took 398.654013ms for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.424816   25306 pod_ready.go:39] duration metric: took 5.200801864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:34.424833   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:57:34.424888   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:57:34.443450   25306 api_server.go:72] duration metric: took 19.56551851s to wait for apiserver process to appear ...
	I1014 13:57:34.443480   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:57:34.443507   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:57:34.447984   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1014 13:57:34.448076   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:57:34.448089   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.448100   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.448108   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.449007   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:57:34.449084   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:57:34.449104   25306 api_server.go:131] duration metric: took 5.616812ms to wait for apiserver health ...
	I1014 13:57:34.449115   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:57:34.620303   25306 request.go:632] Waited for 171.103547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620363   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.620380   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.620385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.626531   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:34.632849   25306 system_pods.go:59] 24 kube-system pods found
	I1014 13:57:34.632878   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:34.632883   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:34.632887   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:34.632891   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:34.632894   25306 system_pods.go:61] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:34.632897   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:34.632900   25306 system_pods.go:61] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:34.632903   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:34.632906   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:34.632909   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:34.632911   25306 system_pods.go:61] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:34.632915   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:34.632917   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:34.632920   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:34.632923   25306 system_pods.go:61] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:34.632926   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:34.632929   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:34.632931   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:34.632934   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:34.632937   25306 system_pods.go:61] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:34.632940   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:34.632942   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:34.632946   25306 system_pods.go:61] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:34.632948   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:34.632953   25306 system_pods.go:74] duration metric: took 183.830824ms to wait for pod list to return data ...
	I1014 13:57:34.632963   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:57:34.820472   25306 request.go:632] Waited for 187.441614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820540   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820546   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.820553   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.820563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.824880   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.824982   25306 default_sa.go:45] found service account: "default"
	I1014 13:57:34.824994   25306 default_sa.go:55] duration metric: took 192.026288ms for default service account to be created ...
	I1014 13:57:34.825002   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:57:35.020105   25306 request.go:632] Waited for 195.031126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020178   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020187   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.020199   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.020209   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.026365   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:35.032685   25306 system_pods.go:86] 24 kube-system pods found
	I1014 13:57:35.032713   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:35.032719   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:35.032722   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:35.032727   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:35.032731   25306 system_pods.go:89] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:35.032736   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:35.032739   25306 system_pods.go:89] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:35.032743   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:35.032747   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:35.032751   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:35.032754   25306 system_pods.go:89] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:35.032758   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:35.032763   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:35.032770   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:35.032774   25306 system_pods.go:89] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:35.032780   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:35.032783   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:35.032789   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:35.032793   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:35.032799   25306 system_pods.go:89] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:35.032803   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:35.032808   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:35.032811   25306 system_pods.go:89] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:35.032816   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:35.032822   25306 system_pods.go:126] duration metric: took 207.815391ms to wait for k8s-apps to be running ...
	I1014 13:57:35.032831   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:57:35.032872   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:57:35.048661   25306 system_svc.go:56] duration metric: took 15.819815ms WaitForService to wait for kubelet
	I1014 13:57:35.048694   25306 kubeadm.go:582] duration metric: took 20.170783435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:57:35.048713   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:57:35.220270   25306 request.go:632] Waited for 171.481631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220338   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220343   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.220351   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.220356   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.224271   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:35.225220   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225243   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225255   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225258   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225264   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225268   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225272   25306 node_conditions.go:105] duration metric: took 176.55497ms to run NodePressure ...
	I1014 13:57:35.225286   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:57:35.225306   25306 start.go:255] writing updated cluster config ...
	I1014 13:57:35.225629   25306 ssh_runner.go:195] Run: rm -f paused
	I1014 13:57:35.278941   25306 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:57:35.281235   25306 out.go:177] * Done! kubectl is now configured to use "ha-450021" cluster and "default" namespace by default
	
	
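The log above ends with minikube's readiness gates: it polls each node and system pod until "Ready", then probes the apiserver's /healthz endpoint and expects a literal "ok" body (see the entries around 13:57:34). The snippet below is a minimal, self-contained sketch of that polling pattern, not minikube's actual code. The endpoint address is the one printed in the log; the TLS-verification skip and the timeout values are assumptions made only so the sketch runs standalone, whereas minikube itself uses the cluster's client certificates from the kubeconfig.

```go
// Minimal sketch of the "wait for apiserver healthz" pattern seen in the log.
// Assumptions: endpoint taken from the log output, TLS verification skipped
// because this sketch carries no CA bundle, and fixed poll/timeout values.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.176:8443/healthz" // address printed by api_server.go in the log

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip verification only because this illustration has no cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// The real check also expects the body to be exactly "ok".
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthz returned 200: ok")
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // back off between polls, as the log's ~500ms cadence suggests
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
```

The node and pod "Ready" checks earlier in the log follow the same shape, except each iteration issues a GET against /api/v1/nodes/<name> or /api/v1/namespaces/kube-system/pods/<name> and inspects the Ready condition in the returned JSON instead of comparing the body to "ok".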
	==> CRI-O <==
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.737991682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914473737965429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa3688cd-6095-4c4f-9674-ba665af86121 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.738933612Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e2244c58-3a9e-4090-86de-ad08c59a7c00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.738991156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e2244c58-3a9e-4090-86de-ad08c59a7c00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.739228856Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e2244c58-3a9e-4090-86de-ad08c59a7c00 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.780290356Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2aa1bfc5-35f9-468e-af32-7d72d1756835 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.780377080Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2aa1bfc5-35f9-468e-af32-7d72d1756835 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.781629546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9d78d2e5-24be-4fa9-baf5-cb3b3ccf4eb8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.782094571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914473782069146,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9d78d2e5-24be-4fa9-baf5-cb3b3ccf4eb8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.782761935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7e70641-e534-48b6-b2e0-c77a30c45745 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.782833578Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7e70641-e534-48b6-b2e0-c77a30c45745 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.783064370Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7e70641-e534-48b6-b2e0-c77a30c45745 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.823088302Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=137506e7-5f68-48a6-9852-83cacffc0e1d name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.823201869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=137506e7-5f68-48a6-9852-83cacffc0e1d name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.825417552Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d338b15b-2e2d-464d-bb6f-6553c10c4017 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.825973338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914473825942942,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d338b15b-2e2d-464d-bb6f-6553c10c4017 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.827402187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=550ddd9a-d3e7-49ad-a382-5d1e6cdafb46 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.827504664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=550ddd9a-d3e7-49ad-a382-5d1e6cdafb46 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.827880035Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=550ddd9a-d3e7-49ad-a382-5d1e6cdafb46 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.874763107Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50803b31-d0f3-4f51-b4ca-8daa0344faf1 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.874980313Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50803b31-d0f3-4f51-b4ca-8daa0344faf1 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.876318513Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bb031a1-e90c-4786-b6ff-33e6439975bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.877134090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914473877101165,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bb031a1-e90c-4786-b6ff-33e6439975bb name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.877697844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40ecaa32-725c-49d6-b195-efec67c77e33 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.877752130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40ecaa32-725c-49d6-b195-efec67c77e33 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:13 ha-450021 crio[655]: time="2024-10-14 14:01:13.877985779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=40ecaa32-725c-49d6-b195-efec67c77e33 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a41053c31fcb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c3201918bd10d       busybox-7dff88458-fkz82
	1051cfacf1c9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   49d4b2387dd65       storage-provisioner
	138a0b23a0907       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   e862ae5ec13c3       coredns-7c65d6cfc9-h5s6h
	b17b6d38f9359       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   b83407d74496b       coredns-7c65d6cfc9-btfml
	b15af89d835ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   10ad22ab64de3       kindnet-c2xkn
	5eec863af38c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   40a3318e89ae5       kube-proxy-dmbpv
	69f6cdf690df6       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   dcc284c053db6       kube-vip-ha-450021
	09fbfff3b334b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ee3335073bb66       kube-controller-manager-ha-450021
	4efae268f9ec3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   ce558cb07ca8f       kube-scheduler-ha-450021
	6ebec97dfd405       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bc7fe679de4dc       etcd-ha-450021
	942c179e591a9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   efaae5865d8af       kube-apiserver-ha-450021
	
	
	==> coredns [138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe] <==
	[INFO] 10.244.1.2:43382 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000121511s
	[INFO] 10.244.1.2:47675 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001762532s
	[INFO] 10.244.0.4:45515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083904s
	[INFO] 10.244.0.4:48451 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149827s
	[INFO] 10.244.0.4:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015272s
	[INFO] 10.244.2.2:40959 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194596s
	[INFO] 10.244.2.2:44151 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212714s
	[INFO] 10.244.2.2:55911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089682s
	[INFO] 10.244.1.2:47272 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299918s
	[INFO] 10.244.1.2:44591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078031s
	[INFO] 10.244.1.2:37471 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072637s
	[INFO] 10.244.0.4:52930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152779s
	[INFO] 10.244.0.4:33266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005592s
	[INFO] 10.244.2.2:36389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000275257s
	[INFO] 10.244.2.2:43232 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010928s
	[INFO] 10.244.2.2:38102 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092762s
	[INFO] 10.244.1.2:55403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222145s
	[INFO] 10.244.1.2:52540 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102916s
	[INFO] 10.244.0.4:54154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135993s
	[INFO] 10.244.0.4:36974 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196993s
	[INFO] 10.244.0.4:54725 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084888s
	[INFO] 10.244.2.2:57068 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174437s
	[INFO] 10.244.1.2:46234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191287s
	[INFO] 10.244.1.2:39695 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080939s
	[INFO] 10.244.1.2:36634 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064427s
	
	
	==> coredns [b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927] <==
	[INFO] 10.244.0.4:50854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009051191s
	[INFO] 10.244.0.4:34637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156712s
	[INFO] 10.244.0.4:33648 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081153s
	[INFO] 10.244.0.4:57465 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003251096s
	[INFO] 10.244.0.4:51433 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118067s
	[INFO] 10.244.2.2:37621 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200056s
	[INFO] 10.244.2.2:41751 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001978554s
	[INFO] 10.244.2.2:33044 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001486731s
	[INFO] 10.244.2.2:43102 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010457s
	[INFO] 10.244.2.2:36141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183057s
	[INFO] 10.244.1.2:35260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.1.2:40737 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00207375s
	[INFO] 10.244.1.2:34377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109225s
	[INFO] 10.244.1.2:48194 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096468s
	[INFO] 10.244.1.2:53649 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092891s
	[INFO] 10.244.0.4:39691 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126403s
	[INFO] 10.244.0.4:59011 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094158s
	[INFO] 10.244.2.2:46754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133215s
	[INFO] 10.244.1.2:44424 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161779s
	[INFO] 10.244.1.2:36322 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:56787 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305054s
	[INFO] 10.244.2.2:56511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168323s
	[INFO] 10.244.2.2:35510 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000291052s
	[INFO] 10.244.2.2:56208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174753s
	[INFO] 10.244.1.2:41964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119677s
	
	
	==> describe nodes <==
	Name:               ha-450021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:54:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-450021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0546a3427732401daacd4235ad46d465
	  System UUID:                0546a342-7732-401d-aacd-4235ad46d465
	  Boot ID:                    19dd080e-b9f2-467d-b5f2-41dbb07e1880
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fkz82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-7c65d6cfc9-btfml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-7c65d6cfc9-h5s6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-450021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m15s
	  kube-system                 kindnet-c2xkn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-450021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-controller-manager-ha-450021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-proxy-dmbpv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-450021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-vip-ha-450021                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m9s                   kube-proxy       
	  Normal  NodeHasSufficientPID     6m22s (x7 over 6m22s)  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m22s (x8 over 6m22s)  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m22s (x8 over 6m22s)  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m15s                  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m15s                  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m15s                  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m11s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  NodeReady                5m56s                  kubelet          Node ha-450021 status is now: NodeReady
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	
	
	Name:               ha-450021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:55:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:58:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-450021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a42e43dc14cb4b949c605bff9ac6e0d6
	  System UUID:                a42e43dc-14cb-4b94-9c60-5bff9ac6e0d6
	  Boot ID:                    479e9a18-0fa8-4366-8acf-af40a06156d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nt6q5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-450021-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m17s
	  kube-system                 kindnet-2ghzc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m18s
	  kube-system                 kube-apiserver-ha-450021-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-controller-manager-ha-450021-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m13s
	  kube-system                 kube-proxy-v24tf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-ha-450021-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-450021-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m13s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m19s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m19s)  kubelet          Node ha-450021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m19s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           5m9s                   node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           3m54s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  NodeNotReady             104s                   node-controller  Node ha-450021-m02 status is now: NodeNotReady
	
	
	Name:               ha-450021-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:57:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-450021-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50171e2610d047279285af0bf8eead91
	  System UUID:                50171e26-10d0-4727-9285-af0bf8eead91
	  Boot ID:                    7b6afcf4-f39b-41c1-92d6-cc1e18f2f3ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lrvnn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-ha-450021-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m2s
	  kube-system                 kindnet-7jwgx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m4s
	  kube-system                 kube-apiserver-ha-450021-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m2s
	  kube-system                 kube-controller-manager-ha-450021-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-proxy-9tbfp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-scheduler-ha-450021-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m3s
	  kube-system                 kube-vip-ha-450021-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  4m4s (x8 over 4m4s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s (x8 over 4m4s)  kubelet          Node ha-450021-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s (x7 over 4m4s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m4s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           3m54s                node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	
	
	Name:               ha-450021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_58_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-450021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8da54fea409461c84c103e8552a3553
	  System UUID:                c8da54fe-a409-461c-84c1-03e8552a3553
	  Boot ID:                    ed9b9ad9-a71a-4814-ae07-6cc1c2775deb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-478bj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m
	  kube-system                 kube-proxy-2mfnd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m54s              kube-proxy       
	  Normal  NodeHasSufficientMemory  3m (x2 over 3m1s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m (x2 over 3m1s)  kubelet          Node ha-450021-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m (x2 over 3m1s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           2m59s              node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           2m56s              node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  NodeReady                2m42s              kubelet          Node ha-450021-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct14 13:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050735] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040529] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.861908] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.617931] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.603277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.339591] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056090] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067047] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182956] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.129853] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268814] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.909642] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.099441] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.067805] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.555395] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.098328] kauditd_printk_skb: 79 callbacks suppressed
	[Oct14 13:55] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.850947] kauditd_printk_skb: 41 callbacks suppressed
	[Oct14 13:56] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1] <==
	{"level":"warn","ts":"2024-10-14T14:01:14.149444Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.158958Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.159870Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.164099Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.178646Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.190018Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.197949Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.203098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.208825Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.217503Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.224888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.232145Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.237623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.241335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.249279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.256426Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.259772Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.264538Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.270237Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.273694Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.277847Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.286764Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.294794Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.348548Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:14.360390Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:01:14 up 6 min,  0 users,  load average: 0.23, 0.21, 0.11
	Linux ha-450021 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996] <==
	I1014 14:00:38.792988       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:00:48.800745       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:00:48.800902       1 main.go:300] handling current node
	I1014 14:00:48.800939       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:00:48.801005       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:00:48.801390       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:00:48.801426       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:00:48.802111       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:00:48.802211       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:00:58.792229       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:00:58.792335       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:00:58.792702       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:00:58.792738       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:00:58.792927       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:00:58.793022       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:00:58.793206       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:00:58.793233       1 main.go:300] handling current node
	I1014 14:01:08.792774       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:08.792894       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:01:08.793209       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:08.793270       1 main.go:300] handling current node
	I1014 14:01:08.793308       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:08.793385       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:08.793725       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:08.793788       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e] <==
	I1014 13:54:59.598140       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 13:54:59.663013       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 13:54:59.717856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 13:55:03.816892       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 13:55:04.117644       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 13:55:56.847231       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.847740       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.384µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1014 13:55:56.849144       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.850518       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.851864       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.726003ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1014 13:57:40.356093       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42006: use of closed network connection
	E1014 13:57:40.548948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42022: use of closed network connection
	E1014 13:57:40.734061       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42040: use of closed network connection
	E1014 13:57:40.931904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42056: use of closed network connection
	E1014 13:57:41.132089       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42064: use of closed network connection
	E1014 13:57:41.311104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42080: use of closed network connection
	E1014 13:57:41.483753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42086: use of closed network connection
	E1014 13:57:41.673306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42104: use of closed network connection
	E1014 13:57:41.861924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41084: use of closed network connection
	E1014 13:57:42.155414       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41118: use of closed network connection
	E1014 13:57:42.326032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41138: use of closed network connection
	E1014 13:57:42.498111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41150: use of closed network connection
	E1014 13:57:42.666091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41168: use of closed network connection
	E1014 13:57:42.837965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41180: use of closed network connection
	E1014 13:57:43.032348       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41204: use of closed network connection
	
	
	==> kube-controller-manager [09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a] <==
	I1014 13:58:14.814158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:14.814232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	E1014 13:58:14.983101       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"131c0255-c34c-4638-a6ae-c00d282c1fc8\", ResourceVersion:\"944\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 14, 13, 55, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\"
,\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000d57240), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"
\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b248), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeC
laimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b260), EmptyDir:(*v1.EmptyDirVolumeSource)(
nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxV
olumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b278), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azu
reFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc000d57280)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSo
urce)(0xc000d57300)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false
, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001b502a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralConta
iner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001820428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d51480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ov
erhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e15e60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001820470)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1014 13:58:14.983373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.178688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.243657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.340286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.399942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263850       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-450021-m04"
	I1014 13:58:18.322338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:24.991672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:58:32.779681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:33.281205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:45.471689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:59:30.147306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:59:30.148143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.170693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.349046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.558914ms"
	I1014 13:59:30.349473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="165.118µs"
	I1014 13:59:33.404625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:35.409214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	
	
	==> kube-proxy [5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:55:05.027976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:55:05.042612       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	E1014 13:55:05.042701       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:55:05.077520       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:55:05.077626       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:55:05.077653       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:55:05.080947       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:55:05.081416       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:55:05.081449       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:55:05.084048       1 config.go:199] "Starting service config controller"
	I1014 13:55:05.084244       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:55:05.084407       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:55:05.084429       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:55:05.085497       1 config.go:328] "Starting node config controller"
	I1014 13:55:05.085525       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:55:05.185149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 13:55:05.185195       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:55:05.185638       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221] <==
	W1014 13:54:57.431755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.431801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.619315       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.619367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.631913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 13:54:57.632033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.666200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.666268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.675854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.675918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.682854       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:54:57.683283       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:54:57.820025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:54:57.820087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:55:00.246826       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1014 13:57:36.278433       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.278688       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c(default/busybox-7dff88458-fkz82) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fkz82"
	E1014 13:57:36.278737       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" pod="default/busybox-7dff88458-fkz82"
	I1014 13:57:36.278788       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.279144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:57:36.279201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c0e6c9da-2bbd-4814-9310-ab74d5a3e09d(default/busybox-7dff88458-lrvnn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lrvnn"
	E1014 13:57:36.279240       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" pod="default/busybox-7dff88458-lrvnn"
	I1014 13:57:36.279273       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:58:14.867309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2mfnd" node="ha-450021-m04"
	E1014 13:58:14.867404       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" pod="kube-system/kube-proxy-2mfnd"
	
	
	==> kubelet <==
	Oct 14 13:59:59 ha-450021 kubelet[1299]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 13:59:59 ha-450021 kubelet[1299]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 13:59:59 ha-450021 kubelet[1299]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 13:59:59 ha-450021 kubelet[1299]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850190    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850218    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852474    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852527    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856761    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856806    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858206    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858470    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861764    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861870    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864513    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864550    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.724357    1299 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866616    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866661    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869535    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869642    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-450021 -n ha-450021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.58s)

x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.396361669s)
ha_test.go:415: expected profile "ha-450021" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-450021\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-450021\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-450021\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.176\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.55\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.127\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevir
t\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\"
,\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-450021 -n ha-450021
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 logs -n 25: (1.426724737s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m03_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m04 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp testdata/cp-test.txt                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m04_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03:/home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m03 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-450021 node stop m02 -v=7                                                     | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:54:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:54:19.812271   25306 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:54:19.812610   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.812625   25306 out.go:358] Setting ErrFile to fd 2...
	I1014 13:54:19.812632   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.813049   25306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:54:19.813610   25306 out.go:352] Setting JSON to false
	I1014 13:54:19.814483   25306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2210,"bootTime":1728911850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:54:19.814571   25306 start.go:139] virtualization: kvm guest
	I1014 13:54:19.816884   25306 out.go:177] * [ha-450021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:54:19.818710   25306 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:54:19.818708   25306 notify.go:220] Checking for updates...
	I1014 13:54:19.821425   25306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:54:19.822777   25306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:54:19.824007   25306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.825232   25306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:54:19.826443   25306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:54:19.827738   25306 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:54:19.861394   25306 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 13:54:19.862707   25306 start.go:297] selected driver: kvm2
	I1014 13:54:19.862720   25306 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:54:19.862734   25306 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:54:19.863393   25306 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.863486   25306 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:54:19.878143   25306 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:54:19.878185   25306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:54:19.878407   25306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:54:19.878437   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:19.878478   25306 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 13:54:19.878486   25306 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:54:19.878530   25306 start.go:340] cluster config:
	{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:19.878657   25306 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.881226   25306 out.go:177] * Starting "ha-450021" primary control-plane node in "ha-450021" cluster
	I1014 13:54:19.882326   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:19.882357   25306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:54:19.882366   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:54:19.882441   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:54:19.882451   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:54:19.882789   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:19.882811   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json: {Name:mk7e7a81dd8e8c0d913c7421cc0d458f1e8a36b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:19.882936   25306 start.go:360] acquireMachinesLock for ha-450021: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:54:19.882963   25306 start.go:364] duration metric: took 16.489µs to acquireMachinesLock for "ha-450021"
	I1014 13:54:19.882982   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:54:19.883029   25306 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 13:54:19.884643   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:54:19.884761   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:19.884802   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:19.899595   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I1014 13:54:19.900085   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:19.900603   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:54:19.900622   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:19.900928   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:19.901089   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:19.901224   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:19.901350   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:54:19.901382   25306 client.go:168] LocalClient.Create starting
	I1014 13:54:19.901414   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:54:19.901441   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901454   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901498   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:54:19.901515   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901544   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901570   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:54:19.901582   25306 main.go:141] libmachine: (ha-450021) Calling .PreCreateCheck
	I1014 13:54:19.901916   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:19.902252   25306 main.go:141] libmachine: Creating machine...
	I1014 13:54:19.902264   25306 main.go:141] libmachine: (ha-450021) Calling .Create
	I1014 13:54:19.902384   25306 main.go:141] libmachine: (ha-450021) Creating KVM machine...
	I1014 13:54:19.903685   25306 main.go:141] libmachine: (ha-450021) DBG | found existing default KVM network
	I1014 13:54:19.904369   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.904236   25330 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1014 13:54:19.904404   25306 main.go:141] libmachine: (ha-450021) DBG | created network xml: 
	I1014 13:54:19.904424   25306 main.go:141] libmachine: (ha-450021) DBG | <network>
	I1014 13:54:19.904433   25306 main.go:141] libmachine: (ha-450021) DBG |   <name>mk-ha-450021</name>
	I1014 13:54:19.904439   25306 main.go:141] libmachine: (ha-450021) DBG |   <dns enable='no'/>
	I1014 13:54:19.904447   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904459   25306 main.go:141] libmachine: (ha-450021) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 13:54:19.904466   25306 main.go:141] libmachine: (ha-450021) DBG |     <dhcp>
	I1014 13:54:19.904474   25306 main.go:141] libmachine: (ha-450021) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 13:54:19.904486   25306 main.go:141] libmachine: (ha-450021) DBG |     </dhcp>
	I1014 13:54:19.904496   25306 main.go:141] libmachine: (ha-450021) DBG |   </ip>
	I1014 13:54:19.904507   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904513   25306 main.go:141] libmachine: (ha-450021) DBG | </network>
	I1014 13:54:19.904522   25306 main.go:141] libmachine: (ha-450021) DBG | 
	I1014 13:54:19.910040   25306 main.go:141] libmachine: (ha-450021) DBG | trying to create private KVM network mk-ha-450021 192.168.39.0/24...
	I1014 13:54:19.971833   25306 main.go:141] libmachine: (ha-450021) DBG | private KVM network mk-ha-450021 192.168.39.0/24 created
	I1014 13:54:19.971862   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.971805   25330 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.971874   25306 main.go:141] libmachine: (ha-450021) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:19.971891   25306 main.go:141] libmachine: (ha-450021) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:54:19.971967   25306 main.go:141] libmachine: (ha-450021) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:54:20.214152   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.214048   25330 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa...
	I1014 13:54:20.270347   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270208   25330 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk...
	I1014 13:54:20.270384   25306 main.go:141] libmachine: (ha-450021) DBG | Writing magic tar header
	I1014 13:54:20.270399   25306 main.go:141] libmachine: (ha-450021) DBG | Writing SSH key tar header
	I1014 13:54:20.270411   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270359   25330 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:20.270469   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021
	I1014 13:54:20.270577   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 (perms=drwx------)
	I1014 13:54:20.270629   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:54:20.270649   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:54:20.270663   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:20.270676   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:54:20.270690   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:54:20.270697   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:54:20.270707   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:54:20.270716   25306 main.go:141] libmachine: (ha-450021) Creating domain...
	I1014 13:54:20.270725   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:54:20.270732   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:54:20.270758   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:54:20.270778   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home
	I1014 13:54:20.270791   25306 main.go:141] libmachine: (ha-450021) DBG | Skipping /home - not owner
	I1014 13:54:20.271873   25306 main.go:141] libmachine: (ha-450021) define libvirt domain using xml: 
	I1014 13:54:20.271895   25306 main.go:141] libmachine: (ha-450021) <domain type='kvm'>
	I1014 13:54:20.271904   25306 main.go:141] libmachine: (ha-450021)   <name>ha-450021</name>
	I1014 13:54:20.271909   25306 main.go:141] libmachine: (ha-450021)   <memory unit='MiB'>2200</memory>
	I1014 13:54:20.271915   25306 main.go:141] libmachine: (ha-450021)   <vcpu>2</vcpu>
	I1014 13:54:20.271922   25306 main.go:141] libmachine: (ha-450021)   <features>
	I1014 13:54:20.271942   25306 main.go:141] libmachine: (ha-450021)     <acpi/>
	I1014 13:54:20.271950   25306 main.go:141] libmachine: (ha-450021)     <apic/>
	I1014 13:54:20.271956   25306 main.go:141] libmachine: (ha-450021)     <pae/>
	I1014 13:54:20.271997   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272026   25306 main.go:141] libmachine: (ha-450021)   </features>
	I1014 13:54:20.272048   25306 main.go:141] libmachine: (ha-450021)   <cpu mode='host-passthrough'>
	I1014 13:54:20.272058   25306 main.go:141] libmachine: (ha-450021)   
	I1014 13:54:20.272070   25306 main.go:141] libmachine: (ha-450021)   </cpu>
	I1014 13:54:20.272081   25306 main.go:141] libmachine: (ha-450021)   <os>
	I1014 13:54:20.272089   25306 main.go:141] libmachine: (ha-450021)     <type>hvm</type>
	I1014 13:54:20.272100   25306 main.go:141] libmachine: (ha-450021)     <boot dev='cdrom'/>
	I1014 13:54:20.272132   25306 main.go:141] libmachine: (ha-450021)     <boot dev='hd'/>
	I1014 13:54:20.272144   25306 main.go:141] libmachine: (ha-450021)     <bootmenu enable='no'/>
	I1014 13:54:20.272150   25306 main.go:141] libmachine: (ha-450021)   </os>
	I1014 13:54:20.272158   25306 main.go:141] libmachine: (ha-450021)   <devices>
	I1014 13:54:20.272173   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='cdrom'>
	I1014 13:54:20.272188   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/boot2docker.iso'/>
	I1014 13:54:20.272198   25306 main.go:141] libmachine: (ha-450021)       <target dev='hdc' bus='scsi'/>
	I1014 13:54:20.272208   25306 main.go:141] libmachine: (ha-450021)       <readonly/>
	I1014 13:54:20.272217   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272224   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='disk'>
	I1014 13:54:20.272233   25306 main.go:141] libmachine: (ha-450021)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:54:20.272252   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk'/>
	I1014 13:54:20.272267   25306 main.go:141] libmachine: (ha-450021)       <target dev='hda' bus='virtio'/>
	I1014 13:54:20.272277   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272287   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272303   25306 main.go:141] libmachine: (ha-450021)       <source network='mk-ha-450021'/>
	I1014 13:54:20.272315   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272323   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272332   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272356   25306 main.go:141] libmachine: (ha-450021)       <source network='default'/>
	I1014 13:54:20.272378   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272390   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272397   25306 main.go:141] libmachine: (ha-450021)     <serial type='pty'>
	I1014 13:54:20.272402   25306 main.go:141] libmachine: (ha-450021)       <target port='0'/>
	I1014 13:54:20.272409   25306 main.go:141] libmachine: (ha-450021)     </serial>
	I1014 13:54:20.272414   25306 main.go:141] libmachine: (ha-450021)     <console type='pty'>
	I1014 13:54:20.272421   25306 main.go:141] libmachine: (ha-450021)       <target type='serial' port='0'/>
	I1014 13:54:20.272426   25306 main.go:141] libmachine: (ha-450021)     </console>
	I1014 13:54:20.272433   25306 main.go:141] libmachine: (ha-450021)     <rng model='virtio'>
	I1014 13:54:20.272442   25306 main.go:141] libmachine: (ha-450021)       <backend model='random'>/dev/random</backend>
	I1014 13:54:20.272449   25306 main.go:141] libmachine: (ha-450021)     </rng>
	I1014 13:54:20.272464   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272479   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272490   25306 main.go:141] libmachine: (ha-450021)   </devices>
	I1014 13:54:20.272499   25306 main.go:141] libmachine: (ha-450021) </domain>
	I1014 13:54:20.272508   25306 main.go:141] libmachine: (ha-450021) 
	I1014 13:54:20.276743   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:57:d6:54 in network default
	I1014 13:54:20.277233   25306 main.go:141] libmachine: (ha-450021) Ensuring networks are active...
	I1014 13:54:20.277256   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:20.277849   25306 main.go:141] libmachine: (ha-450021) Ensuring network default is active
	I1014 13:54:20.278100   25306 main.go:141] libmachine: (ha-450021) Ensuring network mk-ha-450021 is active
	I1014 13:54:20.278557   25306 main.go:141] libmachine: (ha-450021) Getting domain xml...
	I1014 13:54:20.279179   25306 main.go:141] libmachine: (ha-450021) Creating domain...
	I1014 13:54:21.462335   25306 main.go:141] libmachine: (ha-450021) Waiting to get IP...
	I1014 13:54:21.463069   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.463429   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.463469   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.463416   25330 retry.go:31] will retry after 252.896893ms: waiting for machine to come up
	I1014 13:54:21.717838   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.718276   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.718307   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.718253   25330 retry.go:31] will retry after 323.417298ms: waiting for machine to come up
	I1014 13:54:22.043653   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.044089   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.044113   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.044049   25330 retry.go:31] will retry after 429.247039ms: waiting for machine to come up
	I1014 13:54:22.474550   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.475007   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.475032   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.474972   25330 retry.go:31] will retry after 584.602082ms: waiting for machine to come up
	I1014 13:54:23.060636   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.061070   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.061096   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.061025   25330 retry.go:31] will retry after 757.618183ms: waiting for machine to come up
	I1014 13:54:23.819839   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.820349   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.820388   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.820305   25330 retry.go:31] will retry after 770.363721ms: waiting for machine to come up
	I1014 13:54:24.592151   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:24.592528   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:24.592563   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:24.592475   25330 retry.go:31] will retry after 746.543201ms: waiting for machine to come up
	I1014 13:54:25.340318   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:25.340826   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:25.340855   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:25.340782   25330 retry.go:31] will retry after 1.064448623s: waiting for machine to come up
	I1014 13:54:26.407039   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:26.407396   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:26.407443   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:26.407341   25330 retry.go:31] will retry after 1.702825811s: waiting for machine to come up
	I1014 13:54:28.112412   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:28.112812   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:28.112833   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:28.112771   25330 retry.go:31] will retry after 2.323768802s: waiting for machine to come up
	I1014 13:54:30.438077   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:30.438423   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:30.438463   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:30.438389   25330 retry.go:31] will retry after 2.882558658s: waiting for machine to come up
	I1014 13:54:33.324506   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:33.324987   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:33.325011   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:33.324949   25330 retry.go:31] will retry after 3.489582892s: waiting for machine to come up
	I1014 13:54:36.817112   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:36.817504   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:36.817523   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:36.817476   25330 retry.go:31] will retry after 4.118141928s: waiting for machine to come up
	I1014 13:54:40.937526   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938020   25306 main.go:141] libmachine: (ha-450021) Found IP for machine: 192.168.39.176
	I1014 13:54:40.938039   25306 main.go:141] libmachine: (ha-450021) Reserving static IP address...
	I1014 13:54:40.938070   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has current primary IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938454   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find host DHCP lease matching {name: "ha-450021", mac: "52:54:00:a1:20:5f", ip: "192.168.39.176"} in network mk-ha-450021
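The lines above show the driver polling libvirt's DHCP leases for the domain's MAC address and sleeping a little longer after every miss until an address appears. A minimal Go sketch of that wait-with-growing-backoff pattern follows; the lookupLeaseIP helper and the delay schedule are illustrative assumptions, not minikube's actual retry.go code.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupLeaseIP is a hypothetical stand-in for querying the libvirt
    // network's DHCP leases for the given MAC address.
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls the lease table, sleeping a little longer after each
    // miss, until it finds an address or the deadline passes.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 4*time.Second {
                delay += delay / 2 // stretch the wait between attempts
            }
        }
        return "", fmt.Errorf("timed out waiting for a DHCP lease for %s", mac)
    }

    func main() {
        if _, err := waitForIP("52:54:00:a1:20:5f", 2*time.Second); err != nil {
            fmt.Println(err)
        }
    }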
	I1014 13:54:41.006419   25306 main.go:141] libmachine: (ha-450021) DBG | Getting to WaitForSSH function...
	I1014 13:54:41.006450   25306 main.go:141] libmachine: (ha-450021) Reserved static IP address: 192.168.39.176
	I1014 13:54:41.006463   25306 main.go:141] libmachine: (ha-450021) Waiting for SSH to be available...
	I1014 13:54:41.008964   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009322   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.009350   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009443   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH client type: external
	I1014 13:54:41.009470   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa (-rw-------)
	I1014 13:54:41.009582   25306 main.go:141] libmachine: (ha-450021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:54:41.009598   25306 main.go:141] libmachine: (ha-450021) DBG | About to run SSH command:
	I1014 13:54:41.009610   25306 main.go:141] libmachine: (ha-450021) DBG | exit 0
	I1014 13:54:41.138539   25306 main.go:141] libmachine: (ha-450021) DBG | SSH cmd err, output: <nil>: 
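WaitForSSH above simply runs `exit 0` over the system ssh client until the command succeeds. A rough sketch of that reachability probe is below; the flag set mirrors the spirit of the options in the log rather than the exact arguments minikube builds, and the key path is a placeholder.

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReady runs `exit 0` on the target through the system ssh client and
    // reports whether the command succeeded.
    func sshReady(user, ip, keyPath string) bool {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, ip),
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        fmt.Println(sshReady("docker", "192.168.39.176", "/path/to/id_rsa"))
    }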
	I1014 13:54:41.138806   25306 main.go:141] libmachine: (ha-450021) KVM machine creation complete!
	I1014 13:54:41.139099   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:41.139669   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139826   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139970   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:54:41.139983   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:54:41.141211   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:54:41.141221   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:54:41.141226   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:54:41.141232   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.143400   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143673   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.143693   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143898   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.144069   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144217   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144390   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.144570   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.144741   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.144750   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:54:41.257764   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:54:41.257787   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:54:41.257794   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.260355   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260721   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.260755   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260886   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.261058   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261185   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261349   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.261568   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.261770   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.261781   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:54:41.387334   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:54:41.387407   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:54:41.387415   25306 main.go:141] libmachine: Provisioning with buildroot...
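Provisioner detection comes down to reading /etc/os-release over SSH and matching the NAME field ("Buildroot" here) against known provisioners. A small sketch of that parse, using the sample output above:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // osReleaseName extracts the NAME field from /etc/os-release contents.
    func osReleaseName(contents string) string {
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "NAME=") {
                return strings.Trim(strings.TrimPrefix(line, "NAME="), `"`)
            }
        }
        return ""
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9\nID=buildroot\n"
        fmt.Println(osReleaseName(sample)) // prints: Buildroot
    }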
	I1014 13:54:41.387428   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387694   25306 buildroot.go:166] provisioning hostname "ha-450021"
	I1014 13:54:41.387742   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.390287   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390677   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.390702   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390836   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.391004   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391122   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391234   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.391358   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.391508   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.391518   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname
	I1014 13:54:41.517186   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 13:54:41.517216   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.520093   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520451   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.520480   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520651   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.520827   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.520970   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.521077   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.521209   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.521391   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.521405   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:54:41.643685   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:54:41.643709   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:54:41.643742   25306 buildroot.go:174] setting up certificates
	I1014 13:54:41.643754   25306 provision.go:84] configureAuth start
	I1014 13:54:41.643778   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.644050   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:41.646478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.646878   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.646897   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.647059   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.648912   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649213   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.649236   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649373   25306 provision.go:143] copyHostCerts
	I1014 13:54:41.649402   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649434   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:54:41.649453   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649515   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:54:41.649594   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649617   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:54:41.649623   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649649   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:54:41.649688   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649704   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:54:41.649710   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649730   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:54:41.649772   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021 san=[127.0.0.1 192.168.39.176 ha-450021 localhost minikube]
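The server certificate above is issued with the listed SANs (loopback, the VM IP, the hostname aliases). A compact sketch of producing a certificate with those SANs via Go's crypto/x509 follows; it self-signs for brevity, whereas the real flow signs with the minikube CA key, and the organization and lifetime values are illustrative.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-450021"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour), // illustrative lifetime
            DNSNames:     []string{"ha-450021", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.176")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        // Self-signed for brevity; the real flow signs with the CA key instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }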
	I1014 13:54:41.997744   25306 provision.go:177] copyRemoteCerts
	I1014 13:54:41.997799   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:54:41.997817   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.000612   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.000903   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.000935   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.001075   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.001266   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.001429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.001565   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.088827   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:54:42.088897   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:54:42.116095   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:54:42.116160   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:54:42.142757   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:54:42.142813   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 13:54:42.169537   25306 provision.go:87] duration metric: took 525.766906ms to configureAuth
	I1014 13:54:42.169566   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:54:42.169754   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:54:42.169842   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.173229   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174055   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.174080   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174242   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.174429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174574   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.174880   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.175029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.175043   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:54:42.406341   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:54:42.406376   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:54:42.406388   25306 main.go:141] libmachine: (ha-450021) Calling .GetURL
	I1014 13:54:42.407812   25306 main.go:141] libmachine: (ha-450021) DBG | Using libvirt version 6000000
	I1014 13:54:42.409824   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410126   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.410157   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410300   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:54:42.410319   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:54:42.410327   25306 client.go:171] duration metric: took 22.508934376s to LocalClient.Create
	I1014 13:54:42.410349   25306 start.go:167] duration metric: took 22.50900119s to libmachine.API.Create "ha-450021"
	I1014 13:54:42.410361   25306 start.go:293] postStartSetup for "ha-450021" (driver="kvm2")
	I1014 13:54:42.410370   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:54:42.410386   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.410579   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:54:42.410619   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.412494   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412776   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.412801   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.413098   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.413204   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.413344   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.501187   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:54:42.505548   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:54:42.505573   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:54:42.505640   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:54:42.505739   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:54:42.505751   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:54:42.505871   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:54:42.515100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:42.540037   25306 start.go:296] duration metric: took 129.664961ms for postStartSetup
	I1014 13:54:42.540090   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:42.540652   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.543542   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.543870   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.543893   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.544115   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:42.544316   25306 start.go:128] duration metric: took 22.661278968s to createHost
	I1014 13:54:42.544340   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.546241   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546584   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.546619   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546735   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.546887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547016   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547115   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.547241   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.547400   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.547410   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:54:42.659258   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914082.633821014
	
	I1014 13:54:42.659276   25306 fix.go:216] guest clock: 1728914082.633821014
	I1014 13:54:42.659283   25306 fix.go:229] Guest: 2024-10-14 13:54:42.633821014 +0000 UTC Remote: 2024-10-14 13:54:42.544328107 +0000 UTC m=+22.768041164 (delta=89.492907ms)
	I1014 13:54:42.659308   25306 fix.go:200] guest clock delta is within tolerance: 89.492907ms
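The clock-skew check above runs `date +%s.%N` on the guest and compares the result against the host clock. A hedged sketch of that comparison; the tolerance value is illustrative, not the one minikube uses.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func main() {
        guestOut := "1728914082.633821014" // sample `date +%s.%N` output from the log
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            panic(err)
        }
        // Float conversion loses a little nanosecond precision; good enough here.
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        const tolerance = time.Second // illustrative threshold
        if math.Abs(float64(delta)) <= float64(tolerance) {
            fmt.Printf("guest clock delta %v is within tolerance\n", delta)
        } else {
            fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
        }
    }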
	I1014 13:54:42.659315   25306 start.go:83] releasing machines lock for "ha-450021", held for 22.776339529s
	I1014 13:54:42.659340   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.659634   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.662263   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662566   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.662590   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662762   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663245   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663382   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663435   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:54:42.663485   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.663584   25306 ssh_runner.go:195] Run: cat /version.json
	I1014 13:54:42.663609   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.665952   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666140   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666285   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666310   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666455   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666495   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666742   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.666851   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.666858   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.667031   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.667026   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.667128   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.747369   25306 ssh_runner.go:195] Run: systemctl --version
	I1014 13:54:42.781149   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:54:42.939239   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:54:42.945827   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:54:42.945908   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:54:42.961868   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:54:42.961898   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:54:42.961965   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:54:42.979523   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:54:42.994309   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:54:42.994364   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:54:43.009231   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:54:43.023792   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:54:43.139525   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:54:43.303272   25306 docker.go:233] disabling docker service ...
	I1014 13:54:43.303333   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:54:43.318132   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:54:43.331650   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:54:43.447799   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:54:43.574532   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:54:43.588882   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:54:43.606788   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:54:43.606849   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.617065   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:54:43.617138   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.627421   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.637692   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.648944   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:54:43.659223   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.669296   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.686887   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
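The sed one-liners above rewrite keys in the CRI-O drop-in config (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl). A small Go sketch of the same whole-line replacement done in memory with regexes; the sample config contents are made up for illustration.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n" +
            "[crio.runtime]\ncgroup_manager = \"systemd\"\n"
        // Replace whole lines, as the sed -i commands above do.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }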
	I1014 13:54:43.697925   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:54:43.707402   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:54:43.707476   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:54:43.720091   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
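Here the sysctl probe fails because br_netfilter is not loaded, so the module is loaded explicitly and IPv4 forwarding is switched on. A hedged sketch of that fallback sequence using os/exec; error handling is minimal and the commands run locally rather than over SSH.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Probe the bridge netfilter sysctl; if it is missing, load br_netfilter.
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Fprintln(os.Stderr, "sysctl probe failed, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                fmt.Fprintln(os.Stderr, "modprobe failed:", err)
            }
        }
        // Enable IPv4 forwarding either way.
        if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
            fmt.Fprintln(os.Stderr, "enabling ip_forward failed:", err)
        }
    }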
	I1014 13:54:43.729667   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:43.845781   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:54:43.932782   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:54:43.932868   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:54:43.938172   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:54:43.938228   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:54:43.941774   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:54:43.979317   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:54:43.979415   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.006952   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.038472   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:54:44.039762   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:44.042304   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042634   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:44.042661   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042831   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:54:44.046611   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:54:44.059369   25306 kubeadm.go:883] updating cluster {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:54:44.059491   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:44.059551   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:44.090998   25306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 13:54:44.091053   25306 ssh_runner.go:195] Run: which lz4
	I1014 13:54:44.094706   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 13:54:44.094776   25306 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 13:54:44.098775   25306 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 13:54:44.098800   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 13:54:45.421436   25306 crio.go:462] duration metric: took 1.326676583s to copy over tarball
	I1014 13:54:45.421513   25306 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 13:54:47.393636   25306 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97209405s)
	I1014 13:54:47.393677   25306 crio.go:469] duration metric: took 1.97220742s to extract the tarball
	I1014 13:54:47.393687   25306 ssh_runner.go:146] rm: /preloaded.tar.lz4
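The preload step above stats /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, and unpacks it with `tar -I lz4` into /var before deleting it. A sketch of the check-then-extract portion via os/exec; paths match the log, error handling is minimal.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4"
        if _, err := os.Stat(tarball); err != nil {
            // In the log this is where the tarball gets scp'd from the host cache.
            fmt.Println("tarball missing, would copy it over first:", err)
            return
        }
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
        }
    }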
	I1014 13:54:47.430848   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:47.475174   25306 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:54:47.475197   25306 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:54:47.475204   25306 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.1 crio true true} ...
	I1014 13:54:47.475299   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:54:47.475375   25306 ssh_runner.go:195] Run: crio config
	I1014 13:54:47.520162   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:47.520183   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:54:47.520192   25306 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:54:47.520214   25306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450021 NodeName:ha-450021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:54:47.520316   25306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-450021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 13:54:47.520338   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:54:47.520375   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:54:47.537448   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:54:47.537535   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1014 13:54:47.537577   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:54:47.551104   25306 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:54:47.551176   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 13:54:47.562687   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1014 13:54:47.578926   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:54:47.594827   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1014 13:54:47.610693   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 13:54:47.626695   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:54:47.630338   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
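The bash pipeline above rewrites /etc/hosts by filtering out any stale control-plane.minikube.internal entry, appending the new mapping, and copying the temp file back into place. A Go sketch of the same upsert logic on an in-memory copy of the file; the sample contents are illustrative.

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry drops any stale mapping for host and appends ip<TAB>host,
    // mirroring the grep -v / echo pipeline in the log.
    func upsertHostsEntry(contents, host, ip string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(contents, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // remove the old entry, if any
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        hosts := "127.0.0.1\tlocalhost\n192.168.39.254\tcontrol-plane.minikube.internal\n"
        fmt.Print(upsertHostsEntry(hosts, "control-plane.minikube.internal", "192.168.39.254"))
    }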
	I1014 13:54:47.642280   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:47.756050   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:54:47.773461   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.176
	I1014 13:54:47.773484   25306 certs.go:194] generating shared ca certs ...
	I1014 13:54:47.773503   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:47.773705   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:54:47.773829   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:54:47.773848   25306 certs.go:256] generating profile certs ...
	I1014 13:54:47.773913   25306 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:54:47.773930   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt with IP's: []
	I1014 13:54:48.113501   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt ...
	I1014 13:54:48.113531   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt: {Name:mkbf9820119866d476b6914d2148d200b676c657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113715   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key ...
	I1014 13:54:48.113731   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key: {Name:mk7d74bdc4633efc50efa47cc87ab000404cd20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113831   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180
	I1014 13:54:48.113850   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.254]
	I1014 13:54:48.267925   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 ...
	I1014 13:54:48.267957   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180: {Name:mkd19ba2c223d25d9a0673db3befa3152f7a2c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268143   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 ...
	I1014 13:54:48.268160   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180: {Name:mkd725fc60a32f585bc691d5e3dd373c3c488835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268262   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:54:48.268370   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:54:48.268460   25306 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:54:48.268481   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt with IP's: []
	I1014 13:54:48.434515   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt ...
	I1014 13:54:48.434539   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt: {Name:mk37070511c0eff0f5c442e93060bbaddee85673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434689   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key ...
	I1014 13:54:48.434700   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key: {Name:mk4252d17e842b88b135b952004ba8203bf67100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434774   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:54:48.434791   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:54:48.434801   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:54:48.434813   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:54:48.434823   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:54:48.434833   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:54:48.434843   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:54:48.434854   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:54:48.434895   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:54:48.434936   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:54:48.434945   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:54:48.434969   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:54:48.434990   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:54:48.435010   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:54:48.435044   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:48.435072   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.435084   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.435096   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.436322   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:54:48.461913   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:54:48.484404   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:54:48.506815   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:54:48.532871   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 13:54:48.555023   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:54:48.577102   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:54:48.599841   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:54:48.622100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:54:48.644244   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:54:48.666067   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:54:48.688272   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:54:48.704452   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:54:48.709950   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:54:48.720462   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724736   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724786   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.730515   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:54:48.740926   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:54:48.751163   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755136   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755173   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.760601   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:54:48.771042   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:54:48.781517   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785721   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785757   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.791039   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
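	The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash naming convention for the system trust store. A minimal sketch of the same pattern, assuming the certificate paths shown in this log:

		# compute the subject hash for the CA (e.g. b5213941 for minikubeCA.pem) and link it into /etc/ssl/certs
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"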
	I1014 13:54:48.801295   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:54:48.805300   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:54:48.805353   25306 kubeadm.go:392] StartCluster: {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:48.805425   25306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:54:48.805474   25306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:54:48.846958   25306 cri.go:89] found id: ""
	I1014 13:54:48.847017   25306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:54:48.856997   25306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:54:48.866515   25306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:54:48.876223   25306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:54:48.876241   25306 kubeadm.go:157] found existing configuration files:
	
	I1014 13:54:48.876288   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:54:48.885144   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:54:48.885195   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:54:48.894355   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:54:48.902957   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:54:48.903009   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:54:48.912153   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.921701   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:54:48.921759   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.931128   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:54:48.939839   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:54:48.939871   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 13:54:48.948948   25306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 13:54:49.168356   25306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:55:00.103864   25306 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:55:00.103941   25306 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:55:00.104029   25306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:55:00.104143   25306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:55:00.104280   25306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:55:00.104375   25306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:55:00.106272   25306 out.go:235]   - Generating certificates and keys ...
	I1014 13:55:00.106362   25306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:55:00.106429   25306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:55:00.106511   25306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:55:00.106612   25306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:55:00.106709   25306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:55:00.106793   25306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:55:00.106864   25306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:55:00.107022   25306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107089   25306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:55:00.107238   25306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107331   25306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:55:00.107416   25306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:55:00.107496   25306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:55:00.107576   25306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:55:00.107656   25306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:55:00.107736   25306 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:55:00.107811   25306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:55:00.107905   25306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:55:00.107957   25306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:55:00.108061   25306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:55:00.108162   25306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:55:00.109922   25306 out.go:235]   - Booting up control plane ...
	I1014 13:55:00.110034   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:55:00.110132   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:55:00.110214   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:55:00.110345   25306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:55:00.110449   25306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:55:00.110494   25306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:55:00.110622   25306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:55:00.110705   25306 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:55:00.110755   25306 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002174478s
	I1014 13:55:00.110843   25306 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:55:00.110911   25306 kubeadm.go:310] [api-check] The API server is healthy after 5.813875513s
	I1014 13:55:00.111034   25306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:55:00.111171   25306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:55:00.111231   25306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:55:00.111391   25306 kubeadm.go:310] [mark-control-plane] Marking the node ha-450021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:55:00.111441   25306 kubeadm.go:310] [bootstrap-token] Using token: e8eaxr.5trfuyfb27hv7e11
	I1014 13:55:00.112896   25306 out.go:235]   - Configuring RBAC rules ...
	I1014 13:55:00.113020   25306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:55:00.113086   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:55:00.113219   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:55:00.113369   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:55:00.113527   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:55:00.113646   25306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:55:00.113778   25306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:55:00.113819   25306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:55:00.113862   25306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:55:00.113868   25306 kubeadm.go:310] 
	I1014 13:55:00.113922   25306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:55:00.113928   25306 kubeadm.go:310] 
	I1014 13:55:00.113997   25306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:55:00.114004   25306 kubeadm.go:310] 
	I1014 13:55:00.114048   25306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:55:00.114129   25306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:55:00.114180   25306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:55:00.114188   25306 kubeadm.go:310] 
	I1014 13:55:00.114245   25306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:55:00.114263   25306 kubeadm.go:310] 
	I1014 13:55:00.114330   25306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:55:00.114341   25306 kubeadm.go:310] 
	I1014 13:55:00.114411   25306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:55:00.114513   25306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:55:00.114572   25306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:55:00.114578   25306 kubeadm.go:310] 
	I1014 13:55:00.114693   25306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:55:00.114784   25306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:55:00.114793   25306 kubeadm.go:310] 
	I1014 13:55:00.114891   25306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.114977   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 13:55:00.114998   25306 kubeadm.go:310] 	--control-plane 
	I1014 13:55:00.115002   25306 kubeadm.go:310] 
	I1014 13:55:00.115074   25306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:55:00.115080   25306 kubeadm.go:310] 
	I1014 13:55:00.115154   25306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.115275   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
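	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 digest of the cluster CA's public key. A hedged sketch of recomputing it on the control-plane node, using the /var/lib/minikube/certs path that appears earlier in this log (path and key type are assumptions based on that output):

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'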
	I1014 13:55:00.115292   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:55:00.115302   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:55:00.117091   25306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 13:55:00.118483   25306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 13:55:00.124368   25306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 13:55:00.124388   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 13:55:00.145958   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 13:55:00.528887   25306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:55:00.528967   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:00.528987   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021 minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=true
	I1014 13:55:00.543744   25306 ops.go:34] apiserver oom_adj: -16
	I1014 13:55:00.662237   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.162275   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.662698   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.163027   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.662525   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.162972   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.662524   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.751160   25306 kubeadm.go:1113] duration metric: took 3.222260966s to wait for elevateKubeSystemPrivileges
	I1014 13:55:03.751200   25306 kubeadm.go:394] duration metric: took 14.945849765s to StartCluster
	I1014 13:55:03.751222   25306 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.751304   25306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.752000   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.752256   25306 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:03.752277   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:55:03.752262   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:55:03.752277   25306 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 13:55:03.752370   25306 addons.go:69] Setting storage-provisioner=true in profile "ha-450021"
	I1014 13:55:03.752388   25306 addons.go:234] Setting addon storage-provisioner=true in "ha-450021"
	I1014 13:55:03.752407   25306 addons.go:69] Setting default-storageclass=true in profile "ha-450021"
	I1014 13:55:03.752422   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.752435   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:03.752440   25306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-450021"
	I1014 13:55:03.752851   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752853   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752892   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.752907   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.768120   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40745
	I1014 13:55:03.768294   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I1014 13:55:03.768559   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.768773   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.769132   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769156   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769285   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769308   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769488   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769594   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769745   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.770040   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.770082   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.771657   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.771868   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 13:55:03.772274   25306 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 13:55:03.772426   25306 addons.go:234] Setting addon default-storageclass=true in "ha-450021"
	I1014 13:55:03.772458   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.772689   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.772720   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.785301   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I1014 13:55:03.785754   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.786274   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.786301   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.786653   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.786685   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I1014 13:55:03.786852   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.787134   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.787596   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.787621   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.787924   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.788463   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.788507   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.788527   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.790666   25306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:55:03.791877   25306 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:03.791892   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:55:03.791905   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.794484   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794853   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.794881   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794998   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.795150   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.795298   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.795425   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.804082   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1014 13:55:03.804475   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.804871   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.804893   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.805154   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.805296   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.806617   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.806811   25306 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:03.806824   25306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:55:03.806838   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.809334   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809735   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.809764   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.810083   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.810214   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.810346   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.916382   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:55:03.970762   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:04.045876   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:04.562851   25306 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
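	The ConfigMap rewrite at 13:55:03.916382 inserts a hosts block ahead of the forward plugin, which is what makes the host record above resolvable from inside the cluster. Based on that sed expression, the injected Corefile fragment should look roughly like:

		hosts {
		   192.168.39.1 host.minikube.internal
		   fallthrough
		}

	(the same pipeline also adds a log directive before the errors line).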
	I1014 13:55:04.828250   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828267   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828285   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828272   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828566   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828578   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828586   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828592   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828628   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828642   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828650   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828657   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828760   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.828781   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828790   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830286   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.830303   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830318   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.830357   25306 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 13:55:04.830377   25306 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 13:55:04.830467   25306 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 13:55:04.830477   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.830487   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.830500   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.851944   25306 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1014 13:55:04.852525   25306 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 13:55:04.852541   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.852549   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.852558   25306 round_trippers.go:473]     Content-Type: application/json
	I1014 13:55:04.852569   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.860873   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:55:04.863865   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.863890   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.864194   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.864235   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.864246   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.865910   25306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 13:55:04.867207   25306 addons.go:510] duration metric: took 1.114927542s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 13:55:04.867245   25306 start.go:246] waiting for cluster config update ...
	I1014 13:55:04.867260   25306 start.go:255] writing updated cluster config ...
	I1014 13:55:04.868981   25306 out.go:201] 
	I1014 13:55:04.870358   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:04.870432   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.871998   25306 out.go:177] * Starting "ha-450021-m02" control-plane node in "ha-450021" cluster
	I1014 13:55:04.873148   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:55:04.873168   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:55:04.873259   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:55:04.873270   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:55:04.873348   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.873725   25306 start.go:360] acquireMachinesLock for ha-450021-m02: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:55:04.873773   25306 start.go:364] duration metric: took 27.606µs to acquireMachinesLock for "ha-450021-m02"
	I1014 13:55:04.873797   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:04.873856   25306 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1014 13:55:04.875450   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:55:04.875534   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:04.875571   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:04.891858   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1014 13:55:04.892468   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:04.893080   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:04.893101   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:04.893416   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:04.893639   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:04.893812   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:04.894009   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:55:04.894037   25306 client.go:168] LocalClient.Create starting
	I1014 13:55:04.894069   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:55:04.894114   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894134   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894211   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:55:04.894240   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894258   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894285   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:55:04.894306   25306 main.go:141] libmachine: (ha-450021-m02) Calling .PreCreateCheck
	I1014 13:55:04.894485   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:04.894889   25306 main.go:141] libmachine: Creating machine...
	I1014 13:55:04.894903   25306 main.go:141] libmachine: (ha-450021-m02) Calling .Create
	I1014 13:55:04.895072   25306 main.go:141] libmachine: (ha-450021-m02) Creating KVM machine...
	I1014 13:55:04.896272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing default KVM network
	I1014 13:55:04.896429   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing private KVM network mk-ha-450021
	I1014 13:55:04.896566   25306 main.go:141] libmachine: (ha-450021-m02) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:04.896592   25306 main.go:141] libmachine: (ha-450021-m02) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:55:04.896679   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:04.896574   25672 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:04.896767   25306 main.go:141] libmachine: (ha-450021-m02) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:55:05.156236   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.156095   25672 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa...
	I1014 13:55:05.229289   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229176   25672 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk...
	I1014 13:55:05.229317   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing magic tar header
	I1014 13:55:05.229327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing SSH key tar header
	I1014 13:55:05.229334   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229291   25672 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:05.229448   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02
	I1014 13:55:05.229476   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:55:05.229494   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 (perms=drwx------)
	I1014 13:55:05.229512   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:05.229525   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:55:05.229536   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:55:05.229551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:55:05.229562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:55:05.229576   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home
	I1014 13:55:05.229584   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Skipping /home - not owner
	I1014 13:55:05.229634   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:55:05.229673   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:55:05.229699   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:55:05.229714   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:55:05.229724   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:05.230559   25306 main.go:141] libmachine: (ha-450021-m02) define libvirt domain using xml: 
	I1014 13:55:05.230582   25306 main.go:141] libmachine: (ha-450021-m02) <domain type='kvm'>
	I1014 13:55:05.230608   25306 main.go:141] libmachine: (ha-450021-m02)   <name>ha-450021-m02</name>
	I1014 13:55:05.230626   25306 main.go:141] libmachine: (ha-450021-m02)   <memory unit='MiB'>2200</memory>
	I1014 13:55:05.230636   25306 main.go:141] libmachine: (ha-450021-m02)   <vcpu>2</vcpu>
	I1014 13:55:05.230650   25306 main.go:141] libmachine: (ha-450021-m02)   <features>
	I1014 13:55:05.230660   25306 main.go:141] libmachine: (ha-450021-m02)     <acpi/>
	I1014 13:55:05.230666   25306 main.go:141] libmachine: (ha-450021-m02)     <apic/>
	I1014 13:55:05.230676   25306 main.go:141] libmachine: (ha-450021-m02)     <pae/>
	I1014 13:55:05.230682   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.230689   25306 main.go:141] libmachine: (ha-450021-m02)   </features>
	I1014 13:55:05.230699   25306 main.go:141] libmachine: (ha-450021-m02)   <cpu mode='host-passthrough'>
	I1014 13:55:05.230706   25306 main.go:141] libmachine: (ha-450021-m02)   
	I1014 13:55:05.230711   25306 main.go:141] libmachine: (ha-450021-m02)   </cpu>
	I1014 13:55:05.230718   25306 main.go:141] libmachine: (ha-450021-m02)   <os>
	I1014 13:55:05.230728   25306 main.go:141] libmachine: (ha-450021-m02)     <type>hvm</type>
	I1014 13:55:05.230739   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='cdrom'/>
	I1014 13:55:05.230748   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='hd'/>
	I1014 13:55:05.230763   25306 main.go:141] libmachine: (ha-450021-m02)     <bootmenu enable='no'/>
	I1014 13:55:05.230773   25306 main.go:141] libmachine: (ha-450021-m02)   </os>
	I1014 13:55:05.230780   25306 main.go:141] libmachine: (ha-450021-m02)   <devices>
	I1014 13:55:05.230790   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='cdrom'>
	I1014 13:55:05.230819   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/boot2docker.iso'/>
	I1014 13:55:05.230839   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hdc' bus='scsi'/>
	I1014 13:55:05.230847   25306 main.go:141] libmachine: (ha-450021-m02)       <readonly/>
	I1014 13:55:05.230854   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230864   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='disk'>
	I1014 13:55:05.230881   25306 main.go:141] libmachine: (ha-450021-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:55:05.230897   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk'/>
	I1014 13:55:05.230912   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hda' bus='virtio'/>
	I1014 13:55:05.230923   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230933   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230942   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='mk-ha-450021'/>
	I1014 13:55:05.230949   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230956   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.230966   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230975   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='default'/>
	I1014 13:55:05.230987   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230998   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.231008   25306 main.go:141] libmachine: (ha-450021-m02)     <serial type='pty'>
	I1014 13:55:05.231016   25306 main.go:141] libmachine: (ha-450021-m02)       <target port='0'/>
	I1014 13:55:05.231026   25306 main.go:141] libmachine: (ha-450021-m02)     </serial>
	I1014 13:55:05.231034   25306 main.go:141] libmachine: (ha-450021-m02)     <console type='pty'>
	I1014 13:55:05.231042   25306 main.go:141] libmachine: (ha-450021-m02)       <target type='serial' port='0'/>
	I1014 13:55:05.231047   25306 main.go:141] libmachine: (ha-450021-m02)     </console>
	I1014 13:55:05.231060   25306 main.go:141] libmachine: (ha-450021-m02)     <rng model='virtio'>
	I1014 13:55:05.231073   25306 main.go:141] libmachine: (ha-450021-m02)       <backend model='random'>/dev/random</backend>
	I1014 13:55:05.231079   25306 main.go:141] libmachine: (ha-450021-m02)     </rng>
	I1014 13:55:05.231090   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231096   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231107   25306 main.go:141] libmachine: (ha-450021-m02)   </devices>
	I1014 13:55:05.231116   25306 main.go:141] libmachine: (ha-450021-m02) </domain>
	I1014 13:55:05.231125   25306 main.go:141] libmachine: (ha-450021-m02) 
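	The generated XML above defines the m02 guest: 2200 MiB of memory, 2 vCPUs, the boot2docker ISO attached as a CD-ROM, a raw disk, and two virtio NICs (the mk-ha-450021 network plus the default network). As a rough sketch of how a domain like this can be defined and booted through the libvirt Go bindings (illustrative only, not the kvm2 driver's actual code; the connection URI and XML string are placeholders):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // Connect to the local system libvirt daemon, as a KVM driver typically would.
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // domainXML stands in for the generated <domain type='kvm'>...</domain> document.
        domainXML := "<domain type='kvm'>...</domain>" // placeholder, not the real XML

        // Define the persistent domain from XML, then boot it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatalf("define: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("start: %v", err)
        }
        log.Println("domain defined and started")
    }

	Defining the domain makes it persistent; Create() then boots it, which is the point where the log switches to waiting for an IP address.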
	I1014 13:55:05.238505   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:39:fb:46 in network default
	I1014 13:55:05.239084   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring networks are active...
	I1014 13:55:05.239109   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:05.239788   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network default is active
	I1014 13:55:05.240113   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network mk-ha-450021 is active
	I1014 13:55:05.240488   25306 main.go:141] libmachine: (ha-450021-m02) Getting domain xml...
	I1014 13:55:05.241224   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:06.508569   25306 main.go:141] libmachine: (ha-450021-m02) Waiting to get IP...
	I1014 13:55:06.509274   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.509728   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.509800   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.509721   25672 retry.go:31] will retry after 253.994001ms: waiting for machine to come up
	I1014 13:55:06.765296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.765720   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.765754   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.765695   25672 retry.go:31] will retry after 330.390593ms: waiting for machine to come up
	I1014 13:55:07.097342   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.097779   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.097809   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.097725   25672 retry.go:31] will retry after 315.743674ms: waiting for machine to come up
	I1014 13:55:07.414954   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.415551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.415596   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.415518   25672 retry.go:31] will retry after 505.396104ms: waiting for machine to come up
	I1014 13:55:07.922086   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.922530   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.922555   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.922518   25672 retry.go:31] will retry after 762.026701ms: waiting for machine to come up
	I1014 13:55:08.686471   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:08.686874   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:08.686903   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:08.686842   25672 retry.go:31] will retry after 891.989591ms: waiting for machine to come up
	I1014 13:55:09.580677   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:09.581174   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:09.581195   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:09.581150   25672 retry.go:31] will retry after 716.006459ms: waiting for machine to come up
	I1014 13:55:10.299036   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:10.299435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:10.299462   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:10.299390   25672 retry.go:31] will retry after 999.038321ms: waiting for machine to come up
	I1014 13:55:11.299678   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:11.300155   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:11.300182   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:11.300092   25672 retry.go:31] will retry after 1.384319167s: waiting for machine to come up
	I1014 13:55:12.686664   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:12.687084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:12.687130   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:12.687031   25672 retry.go:31] will retry after 1.750600606s: waiting for machine to come up
	I1014 13:55:14.439721   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:14.440157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:14.440185   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:14.440132   25672 retry.go:31] will retry after 2.719291498s: waiting for machine to come up
	I1014 13:55:17.160916   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:17.161338   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:17.161359   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:17.161288   25672 retry.go:31] will retry after 2.934487947s: waiting for machine to come up
	I1014 13:55:20.097623   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:20.098033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:20.098054   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:20.097994   25672 retry.go:31] will retry after 3.495468914s: waiting for machine to come up
	I1014 13:55:23.597556   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:23.598084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:23.598105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:23.598043   25672 retry.go:31] will retry after 4.955902252s: waiting for machine to come up
	I1014 13:55:28.555767   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.556335   25306 main.go:141] libmachine: (ha-450021-m02) Found IP for machine: 192.168.39.89
	I1014 13:55:28.556360   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
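	From 13:55:06 to 13:55:28 the driver repeatedly looks up the guest's MAC address in the network's DHCP leases, sleeping a growing, jittered interval between attempts. A minimal sketch of that retry-with-backoff pattern, assuming a hypothetical lookupIP helper in place of the driver's lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt network's
    // DHCP leases for a given MAC address.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            // Sleep a jittered, growing interval, mirroring the retry.go lines above.
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        ip, err := waitForIP("52:54:00:51:58:78", 30*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("found IP:", ip)
    }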
	I1014 13:55:28.556369   25306 main.go:141] libmachine: (ha-450021-m02) Reserving static IP address...
	I1014 13:55:28.556652   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "ha-450021-m02", mac: "52:54:00:51:58:78", ip: "192.168.39.89"} in network mk-ha-450021
	I1014 13:55:28.627598   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:28.627633   25306 main.go:141] libmachine: (ha-450021-m02) Reserved static IP address: 192.168.39.89
	I1014 13:55:28.627646   25306 main.go:141] libmachine: (ha-450021-m02) Waiting for SSH to be available...
	I1014 13:55:28.629843   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.630161   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021
	I1014 13:55:28.630190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:51:58:78
	I1014 13:55:28.630310   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:28.630337   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:28.630368   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:28.630381   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:28.630396   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:28.634134   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:55:28.634150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:55:28.634157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | command : exit 0
	I1014 13:55:28.634162   25306 main.go:141] libmachine: (ha-450021-m02) DBG | err     : exit status 255
	I1014 13:55:28.634170   25306 main.go:141] libmachine: (ha-450021-m02) DBG | output  : 
	I1014 13:55:31.634385   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:31.636814   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.637150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637249   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:31.637272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:31.637290   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:31.637302   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:31.637327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:31.762693   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: <nil>: 
	I1014 13:55:31.762993   25306 main.go:141] libmachine: (ha-450021-m02) KVM machine creation complete!
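	Creation is only reported complete once an exit 0 probe over SSH succeeds; the first attempt at 13:55:28 fails with exit status 255 because the interface has no lease yet. A hedged sketch of such a probe using golang.org/x/crypto/ssh, assuming key-based auth as the docker user (the address and key path below are placeholders):

    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func sshProbe(addr, user, keyPath string) error {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM, host key not pinned
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        // The same trivial command the provisioner runs to confirm SSH is usable.
        return sess.Run("exit 0")
    }

    func main() {
        if err := sshProbe("192.168.39.89:22", "docker", "/path/to/id_rsa"); err != nil {
            log.Fatalf("ssh not ready: %v", err)
        }
        log.Println("ssh is available")
    }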
	I1014 13:55:31.763308   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:31.763786   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.763969   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.764130   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:55:31.764154   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetState
	I1014 13:55:31.765484   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:55:31.765498   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:55:31.765506   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:55:31.765513   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.767968   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768352   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.768386   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768540   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.768701   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.768883   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.769050   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.769231   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.769460   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.769474   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:55:31.877746   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:55:31.877770   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:55:31.877779   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.880489   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.880858   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.880884   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.881034   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.881200   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881337   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881482   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.881602   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.881767   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.881780   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:55:31.995447   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:55:31.995515   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:55:31.995529   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:55:31.995541   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995787   25306 buildroot.go:166] provisioning hostname "ha-450021-m02"
	I1014 13:55:31.995817   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995999   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.998434   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998820   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.998841   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998986   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.999184   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999375   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999496   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.999675   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.999836   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.999847   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m02 && echo "ha-450021-m02" | sudo tee /etc/hostname
	I1014 13:55:32.125055   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m02
	
	I1014 13:55:32.125093   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.128764   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129158   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.129191   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129369   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.129548   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129704   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129831   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.129997   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.130195   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.130212   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:55:32.251676   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:55:32.251705   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:55:32.251731   25306 buildroot.go:174] setting up certificates
	I1014 13:55:32.251744   25306 provision.go:84] configureAuth start
	I1014 13:55:32.251763   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:32.252028   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.254513   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.254862   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.254887   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.255045   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.257083   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257408   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.257435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257565   25306 provision.go:143] copyHostCerts
	I1014 13:55:32.257592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257618   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:55:32.257629   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257712   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:55:32.257797   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257821   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:55:32.257831   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257870   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:55:32.257928   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257951   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:55:32.257959   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257986   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:55:32.258053   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m02 san=[127.0.0.1 192.168.39.89 ha-450021-m02 localhost minikube]
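	configureAuth mints a per-machine server certificate whose SANs cover 127.0.0.1, the node IP and a few hostname aliases. A simplified, self-signed approximation with crypto/x509 (the real flow signs with the shared ca.pem/ca-key.pem pair; the names and IPs below are copied from the log line above):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-450021-m02"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the log: loopback, node IP, hostname aliases.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
            DNSNames:    []string{"ha-450021-m02", "localhost", "minikube"},
        }
        // Self-signed here for brevity; the provisioner signs with the CA instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }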
	I1014 13:55:32.418210   25306 provision.go:177] copyRemoteCerts
	I1014 13:55:32.418267   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:55:32.418287   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.421033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421356   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.421387   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421587   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.421794   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.421949   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.422067   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.508850   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:55:32.508917   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:55:32.534047   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:55:32.534120   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:55:32.558263   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:55:32.558335   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:55:32.582102   25306 provision.go:87] duration metric: took 330.342541ms to configureAuth
	I1014 13:55:32.582134   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:55:32.582301   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:32.582371   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.584832   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585166   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.585192   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585349   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.585528   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585644   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585802   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.585929   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.586092   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.586111   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:55:32.822330   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:55:32.822358   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:55:32.822366   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetURL
	I1014 13:55:32.823614   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using libvirt version 6000000
	I1014 13:55:32.826190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.826567   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826737   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:55:32.826754   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:55:32.826772   25306 client.go:171] duration metric: took 27.932717671s to LocalClient.Create
	I1014 13:55:32.826803   25306 start.go:167] duration metric: took 27.93279451s to libmachine.API.Create "ha-450021"
	I1014 13:55:32.826815   25306 start.go:293] postStartSetup for "ha-450021-m02" (driver="kvm2")
	I1014 13:55:32.826825   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:55:32.826846   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:32.827073   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:55:32.827097   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.829440   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829745   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.829785   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829885   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.830054   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.830208   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.830348   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.918434   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:55:32.922919   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:55:32.922947   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:55:32.923010   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:55:32.923092   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:55:32.923101   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:55:32.923187   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:55:32.933129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:32.957819   25306 start.go:296] duration metric: took 130.989484ms for postStartSetup
	I1014 13:55:32.957871   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:32.958438   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.961024   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961393   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.961425   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961630   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:32.961835   25306 start.go:128] duration metric: took 28.087968814s to createHost
	I1014 13:55:32.961858   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.964121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964493   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.964528   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964702   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.964854   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.964966   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.965109   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.965227   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.965432   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.965446   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:55:33.079362   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914133.060490571
	
	I1014 13:55:33.079386   25306 fix.go:216] guest clock: 1728914133.060490571
	I1014 13:55:33.079405   25306 fix.go:229] Guest: 2024-10-14 13:55:33.060490571 +0000 UTC Remote: 2024-10-14 13:55:32.961847349 +0000 UTC m=+73.185560400 (delta=98.643222ms)
	I1014 13:55:33.079425   25306 fix.go:200] guest clock delta is within tolerance: 98.643222ms
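	The guest clock check runs date +%s.%N on the VM and compares it to the host's wall clock; here the skew is about 99 ms, which is within tolerance. A small sketch of that comparison, assuming a two-second tolerance (the actual threshold is not shown in this excerpt):

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // parseGuestClock converts the "date +%s.%N" output into a time.Time.
    // Float parsing loses a little sub-microsecond precision, which is fine
    // for a skew check at millisecond scale.
    func parseGuestClock(out string) (time.Time, error) {
        secs, err := strconv.ParseFloat(out, 64)
        if err != nil {
            return time.Time{}, err
        }
        sec := int64(secs)
        nsec := int64((secs - float64(sec)) * 1e9)
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1728914133.060490571")
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }
        // Assumed tolerance; only resync the guest clock if the skew exceeds it.
        const tolerance = 2 * time.Second
        fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
    }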
	I1014 13:55:33.079431   25306 start.go:83] releasing machines lock for "ha-450021-m02", held for 28.205646747s
	I1014 13:55:33.079452   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.079689   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:33.082245   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.082619   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.082645   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.085035   25306 out.go:177] * Found network options:
	I1014 13:55:33.086426   25306 out.go:177]   - NO_PROXY=192.168.39.176
	W1014 13:55:33.087574   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.087613   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088138   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088304   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088401   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:55:33.088445   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	W1014 13:55:33.088467   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.088536   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:55:33.088557   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:33.091084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091497   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091525   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091675   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091813   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091867   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.091959   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.092027   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092088   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092156   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.092203   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.324240   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:55:33.330527   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:55:33.330586   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:55:33.345640   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:55:33.345657   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:55:33.345701   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:55:33.361741   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:55:33.375019   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:55:33.375071   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:55:33.388301   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:55:33.401227   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:55:33.511329   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:55:33.658848   25306 docker.go:233] disabling docker service ...
	I1014 13:55:33.658913   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:55:33.673279   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:55:33.685917   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:55:33.818316   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:55:33.936222   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:55:33.950467   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:55:33.970208   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:55:33.970265   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.984110   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:55:33.984169   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.995549   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.006565   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.018479   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:55:34.030013   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.041645   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.059707   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.070442   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:55:34.080309   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:55:34.080366   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:55:34.093735   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:55:34.103445   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:34.215901   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:55:34.308754   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:55:34.308820   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:55:34.313625   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:55:34.313676   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:55:34.317635   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:55:34.356534   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:55:34.356604   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.384187   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.414404   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:55:34.415699   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:55:34.416965   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:34.419296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419601   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:34.419628   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419811   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:55:34.423754   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:55:34.435980   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:55:34.436151   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:34.436381   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.436419   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.450826   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I1014 13:55:34.451213   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.451655   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.451677   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.451944   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.452123   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:34.453521   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:34.453781   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.453811   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.467708   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I1014 13:55:34.468144   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.468583   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.468597   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.468863   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.469023   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:34.469168   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.89
	I1014 13:55:34.469180   25306 certs.go:194] generating shared ca certs ...
	I1014 13:55:34.469197   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.469314   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:55:34.469365   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:55:34.469378   25306 certs.go:256] generating profile certs ...
	I1014 13:55:34.469463   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:55:34.469494   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796
	I1014 13:55:34.469515   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.254]
	I1014 13:55:34.810302   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 ...
	I1014 13:55:34.810336   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796: {Name:mk62309e383c07d7599f8a1200bdc69462a2d14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810530   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 ...
	I1014 13:55:34.810549   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796: {Name:mkf013e40a46367f5d473382a243ff918ed6f0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810679   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:55:34.810843   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:55:34.811031   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:55:34.811055   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:55:34.811078   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:55:34.811100   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:55:34.811122   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:55:34.811141   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:55:34.811162   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:55:34.811184   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:55:34.811205   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:55:34.811281   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:55:34.811405   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:55:34.811439   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:55:34.811482   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:55:34.811508   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:55:34.811530   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:55:34.811573   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:34.811602   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:34.811623   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:55:34.811635   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:55:34.811667   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:34.814657   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815058   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:34.815083   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815262   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:34.815417   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:34.815552   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:34.815647   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:34.891004   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:55:34.895702   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:55:34.906613   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:55:34.910438   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:55:34.923172   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:55:34.928434   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:55:34.941440   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:55:34.946469   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:55:34.957168   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:55:34.961259   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:55:34.972556   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:55:34.980332   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:55:34.991839   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:55:35.019053   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:55:35.043395   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:55:35.066158   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:55:35.088175   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 13:55:35.110925   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 13:55:35.134916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:55:35.158129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:55:35.180405   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:55:35.202548   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:55:35.225992   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:55:35.249981   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:55:35.266180   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:55:35.282687   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:55:35.299271   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:55:35.316623   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:55:35.332853   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:55:35.348570   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:55:35.364739   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:55:35.370372   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:55:35.380736   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385152   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385211   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.390839   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:55:35.401523   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:55:35.412185   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416457   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416547   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.421940   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:55:35.432212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:55:35.442100   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446159   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446196   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.451427   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
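The ls/openssl/ln sequence above is how each CA is registered in the node's trust directory: the certificate sits under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked to it (b5213941 is the hash computed for minikubeCA.pem in this run). The same steps for a single certificate, mirroring the logged commands:

    # compute the subject hash OpenSSL uses to look up trusted CAs, then create
    # the /etc/ssl/certs/<hash>.0 symlink pointing at the certificate
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"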
	I1014 13:55:35.461211   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:55:35.465126   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:55:35.465175   25306 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.1 crio true true} ...
	I1014 13:55:35.465273   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
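The kubelet unit and drop-in rendered above are written a few steps later to /lib/systemd/system/kubelet.service and /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A quick way to inspect what actually landed on the new node; the profile name comes from the log, while addressing the node as ha-450021-m02 via -n is an assumption:

    # dump the kubelet unit and its drop-ins exactly as systemd sees them on the new node
    minikube -p ha-450021 ssh -n ha-450021-m02 -- systemctl cat kubelet
    # check the running kubelet picked up the overridden flags (--node-ip, --hostname-override)
    minikube -p ha-450021 ssh -n ha-450021-m02 -- pgrep -a kubelet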
	I1014 13:55:35.465315   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:55:35.465353   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:55:35.480860   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:55:35.480912   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
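The manifest above is the static pod definition written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp a few lines below); kube-vip then claims the control-plane VIP 192.168.39.254 and load-balances port 8443 across control-plane nodes. A rough check that the VIP answers once the pod is up, assuming the kubeconfig context matches the profile name and that the cluster keeps kubeadm's default anonymous access to the health endpoints:

    # hit the API server through the kube-vip virtual IP and port from the manifest above
    curl -k https://192.168.39.254:8443/readyz
    # once the node has joined, the static pod is visible in kube-system
    kubectl --context ha-450021 -n kube-system get pods -o wide | grep kube-vip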
	I1014 13:55:35.480953   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.489708   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:55:35.489755   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.498478   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:55:35.498498   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498541   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498556   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1014 13:55:35.498585   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1014 13:55:35.502947   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:55:35.502966   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:55:36.107052   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.107146   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.112161   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:55:36.112193   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:55:36.135646   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:55:36.156399   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.156509   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.173587   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:55:36.173634   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1014 13:55:36.629216   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:55:36.638544   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:55:36.654373   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:55:36.670100   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:55:36.685420   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:55:36.689062   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:55:36.700413   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:36.822396   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:36.840300   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:36.840777   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:36.840820   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:36.856367   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I1014 13:55:36.856879   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:36.857323   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:36.857351   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:36.857672   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:36.857841   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:36.857975   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:55:36.858071   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:55:36.858091   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:36.860736   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861146   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:36.861185   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861337   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:36.861529   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:36.861694   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:36.861807   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:37.015771   25306 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:37.015819   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1014 13:55:58.710606   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (21.694741621s)
	I1014 13:55:58.710647   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:55:59.236903   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m02 minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:55:59.350641   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:55:59.452342   25306 start.go:319] duration metric: took 22.5943626s to joinCluster
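The joinCluster step follows the standard kubeadm flow: mint a join token on the existing control plane with --print-join-command, then run the printed command on the new machine with the extra control-plane flags minikube appends. Stripped of the minikube wrapper, the shape of the two commands is (token and CA hash left as placeholders):

    # on an existing control-plane node: print a join command with a non-expiring token
    sudo kubeadm token create --print-join-command --ttl=0
    # on the joining node: run the printed command, promoted to a control-plane member
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane \
      --apiserver-advertise-address=192.168.39.89 \
      --apiserver-bind-port=8443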
	I1014 13:55:59.452418   25306 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:59.452735   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:59.453925   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:55:59.454985   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:59.700035   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:59.782880   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:59.783215   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:55:59.783307   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
	I1014 13:55:59.783576   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m02" to be "Ready" ...
	I1014 13:55:59.783682   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:55:59.783696   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:59.783707   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:59.783718   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:59.796335   25306 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 13:56:00.284246   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.284269   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.284281   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.284288   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.300499   25306 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1014 13:56:00.784180   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.784204   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.784212   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.784217   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.811652   25306 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1014 13:56:01.284849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.284881   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.284893   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.284898   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.288918   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:01.783917   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.783937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.783945   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.783949   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.787799   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:01.788614   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:02.284602   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.284624   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.284632   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.284642   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.290773   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:02.783789   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.783815   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.783826   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.783831   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.788075   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.284032   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.284057   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.284068   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.284074   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.287614   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:03.783925   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.783945   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.783953   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.783956   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.788205   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.788893   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:04.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.283987   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.283995   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.283999   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.287325   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:04.784192   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.784212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.784219   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.784225   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.787474   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:05.284787   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.284804   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.284813   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.284815   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.293558   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:05.784473   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.784495   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.784505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.784509   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.787964   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:06.283912   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.283936   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.283946   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.283954   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.286733   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:06.287200   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:06.784670   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.784694   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.784706   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.784711   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.788422   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:07.283873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.283901   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.283913   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.283918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.286693   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:07.784588   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.784609   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.784617   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.784621   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.787856   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:08.284107   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.284126   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.284134   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.284138   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.287096   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:08.287719   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:08.784096   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.784116   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.784124   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.784127   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.787645   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.284728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.284752   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.284759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.284764   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.288184   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.784057   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.784097   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.784108   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.784122   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.793007   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:10.284378   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.284400   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.284408   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.284413   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.287852   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:10.288463   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:10.783831   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.783850   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.783858   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.783862   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.787590   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:11.284759   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.284781   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.284790   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.284794   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.287610   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:11.784640   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.784659   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.784667   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.784672   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.787776   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:12.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.283997   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.284009   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.284014   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.289974   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:56:12.290779   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:12.784021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.784047   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.784061   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.784069   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.787917   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.283870   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.283893   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.283901   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.287328   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.784620   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.784644   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.784653   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.784657   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.787810   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.283867   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.283892   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.283900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.287541   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.784419   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.784440   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.784447   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.784450   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.787853   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.788359   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:15.284687   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.284709   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.284720   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.284726   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.287861   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.288461   25306 node_ready.go:49] node "ha-450021-m02" has status "Ready":"True"
	I1014 13:56:15.288480   25306 node_ready.go:38] duration metric: took 15.504881835s for node "ha-450021-m02" to be "Ready" ...
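The loop above polls GET /api/v1/nodes/ha-450021-m02 roughly twice per second until the node reports Ready, which takes about 15.5s here. The same condition expressed as a single kubectl call, assuming the kubeconfig context matches the profile name:

    # wait for the same Ready condition the log is polling, with the same 6m budget
    kubectl --context ha-450021 wait --for=condition=Ready node/ha-450021-m02 --timeout=6m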
	I1014 13:56:15.288487   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:56:15.288543   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:15.288553   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.288559   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.288563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.292417   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.298105   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.298175   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:56:15.298182   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.298189   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.298194   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.300838   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.301679   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.301692   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.301699   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.301703   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.304037   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.304599   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.304614   25306 pod_ready.go:82] duration metric: took 6.489417ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304622   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304661   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:56:15.304669   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.304683   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.304694   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.306880   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.307573   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.307590   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.307600   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.307610   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.309331   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.309944   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.309963   25306 pod_ready.go:82] duration metric: took 5.334499ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.309975   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.310021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:56:15.310032   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.310044   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.310060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312281   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.312954   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.312972   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.312984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.314997   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.315561   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.315581   25306 pod_ready.go:82] duration metric: took 5.597491ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315592   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315648   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:56:15.315660   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.315671   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.315680   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.317496   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.318188   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.318205   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.318217   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.318224   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.320143   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.320663   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.320681   25306 pod_ready.go:82] duration metric: took 5.077444ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.320700   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.485053   25306 request.go:632] Waited for 164.298634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485113   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485118   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.485126   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.485130   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.488373   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.685383   25306 request.go:632] Waited for 196.403765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685451   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685458   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.685469   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.685478   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.688990   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.689603   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.689627   25306 pod_ready.go:82] duration metric: took 368.913108ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.689641   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.885558   25306 request.go:632] Waited for 195.846701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885605   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885611   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.885618   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.885623   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.889124   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.084785   25306 request.go:632] Waited for 194.38123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084845   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.084853   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.084857   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.088301   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.088998   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.089015   25306 pod_ready.go:82] duration metric: took 399.36552ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.089025   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.285209   25306 request.go:632] Waited for 196.12444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285293   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285302   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.285313   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.285319   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.289023   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.485127   25306 request.go:632] Waited for 195.353812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485198   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.485224   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.485231   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.488483   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.489170   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.489190   25306 pod_ready.go:82] duration metric: took 400.158231ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.489202   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.685336   25306 request.go:632] Waited for 196.062822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685418   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685429   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.685440   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.685449   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.688757   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.884883   25306 request.go:632] Waited for 195.393841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884933   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.884945   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.884950   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.888074   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.888564   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.888582   25306 pod_ready.go:82] duration metric: took 399.371713ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.888594   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.084731   25306 request.go:632] Waited for 196.036159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084792   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084799   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.084811   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.084818   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.088594   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.284774   25306 request.go:632] Waited for 195.293808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284866   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284878   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.284889   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.284900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.288050   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.288623   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.288647   25306 pod_ready.go:82] duration metric: took 400.044261ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.288659   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.485648   25306 request.go:632] Waited for 196.912408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485723   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485734   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.485744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.485752   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.488420   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:17.685402   25306 request.go:632] Waited for 196.37897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685455   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685460   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.685467   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.685471   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.689419   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.690366   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.690386   25306 pod_ready.go:82] duration metric: took 401.717488ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.690395   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.885498   25306 request.go:632] Waited for 195.043697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885569   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.885576   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.885581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.888648   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.085570   25306 request.go:632] Waited for 196.366356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085639   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085649   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.085660   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.085668   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.088834   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.089495   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.089519   25306 pod_ready.go:82] duration metric: took 399.116695ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.089532   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.285606   25306 request.go:632] Waited for 196.011378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285677   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285685   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.285693   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.285699   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.288947   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.484902   25306 request.go:632] Waited for 195.327209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484963   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484970   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.484981   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.484989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.488080   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.488592   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.488612   25306 pod_ready.go:82] duration metric: took 399.071687ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.488628   25306 pod_ready.go:39] duration metric: took 3.200130009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
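	The readiness wait above issues raw, client-side-throttled GETs against the pod and node endpoints and checks each pod's Ready condition. For reference only (not minikube's actual code), a minimal sketch of the same check using client-go's typed API; the kubeconfig path and helper name are illustrative:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // coarse fixed poll; the log above relies on client-side throttling instead
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-450021", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}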
	I1014 13:56:18.488645   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:56:18.488706   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:56:18.504222   25306 api_server.go:72] duration metric: took 19.051768004s to wait for apiserver process to appear ...
	I1014 13:56:18.504252   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:56:18.504274   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:56:18.508419   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1014 13:56:18.508480   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:56:18.508494   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.508504   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.508511   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.509353   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:56:18.509470   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:56:18.509489   25306 api_server.go:131] duration metric: took 5.230064ms to wait for apiserver health ...
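	The health check above is a plain HTTPS GET against /healthz followed by /version; on a default cluster both paths are typically readable even anonymously via the system:public-info-viewer role. A rough equivalent in Go, with TLS verification skipped purely for illustration rather than loading the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
		}
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get("https://192.168.39.176:8443" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s -> %s: %s\n", path, resp.Status, body)
		}
	}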
	I1014 13:56:18.509499   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:56:18.684863   25306 request.go:632] Waited for 175.279951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684960   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684974   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.684985   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.684994   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.691157   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:18.697135   25306 system_pods.go:59] 17 kube-system pods found
	I1014 13:56:18.697234   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:18.697252   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:18.697264   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:18.697271   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:18.697279   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:18.697284   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:18.697290   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:18.697299   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:18.697305   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:18.697314   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:18.697319   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:18.697328   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:18.697334   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:18.697340   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:18.697345   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:18.697350   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:18.697356   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:18.697364   25306 system_pods.go:74] duration metric: took 187.854432ms to wait for pod list to return data ...
	I1014 13:56:18.697375   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:56:18.884741   25306 request.go:632] Waited for 187.279644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884797   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884802   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.884809   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.884813   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.888582   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.888812   25306 default_sa.go:45] found service account: "default"
	I1014 13:56:18.888830   25306 default_sa.go:55] duration metric: took 191.448571ms for default service account to be created ...
	I1014 13:56:18.888841   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:56:19.085294   25306 request.go:632] Waited for 196.363765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085358   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085366   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.085377   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.085383   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.092864   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:56:19.097323   25306 system_pods.go:86] 17 kube-system pods found
	I1014 13:56:19.097351   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:19.097357   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:19.097362   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:19.097366   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:19.097370   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:19.097374   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:19.097377   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:19.097382   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:19.097387   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:19.097390   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:19.097394   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:19.097398   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:19.097402   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:19.097411   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:19.097417   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:19.097420   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:19.097423   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:19.097429   25306 system_pods.go:126] duration metric: took 208.581366ms to wait for k8s-apps to be running ...
	I1014 13:56:19.097436   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:56:19.097477   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:19.112071   25306 system_svc.go:56] duration metric: took 14.628482ms WaitForService to wait for kubelet
	I1014 13:56:19.112097   25306 kubeadm.go:582] duration metric: took 19.659648051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:56:19.112113   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:56:19.285537   25306 request.go:632] Waited for 173.355083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285629   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285637   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.285649   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.285654   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.289726   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:19.290673   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290698   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290712   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290717   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290723   25306 node_conditions.go:105] duration metric: took 178.605419ms to run NodePressure ...
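	The NodePressure step above reads each node's reported capacity and conditions. A compact, illustrative sketch of that read with client-go (kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// Capacity is what the log reports as "node storage ephemeral capacity" and "node cpu capacity".
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
			for _, c := range n.Status.Conditions {
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition: %s\n", c.Type)
				}
			}
		}
	}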
	I1014 13:56:19.290740   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:56:19.290784   25306 start.go:255] writing updated cluster config ...
	I1014 13:56:19.292978   25306 out.go:201] 
	I1014 13:56:19.294410   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:19.294496   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.296041   25306 out.go:177] * Starting "ha-450021-m03" control-plane node in "ha-450021" cluster
	I1014 13:56:19.297096   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:56:19.297116   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:56:19.297204   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:56:19.297214   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:56:19.297292   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.297485   25306 start.go:360] acquireMachinesLock for ha-450021-m03: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:56:19.297521   25306 start.go:364] duration metric: took 20.106µs to acquireMachinesLock for "ha-450021-m03"
	I1014 13:56:19.297537   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:19.297616   25306 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1014 13:56:19.299122   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:56:19.299222   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:19.299255   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:19.313918   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I1014 13:56:19.314305   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:19.314837   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:19.314851   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:19.315181   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:19.315347   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:19.315509   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:19.315639   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:56:19.315670   25306 client.go:168] LocalClient.Create starting
	I1014 13:56:19.315704   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:56:19.315748   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315768   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315834   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:56:19.315859   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315870   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315884   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:56:19.315892   25306 main.go:141] libmachine: (ha-450021-m03) Calling .PreCreateCheck
	I1014 13:56:19.316068   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:19.316425   25306 main.go:141] libmachine: Creating machine...
	I1014 13:56:19.316438   25306 main.go:141] libmachine: (ha-450021-m03) Calling .Create
	I1014 13:56:19.316586   25306 main.go:141] libmachine: (ha-450021-m03) Creating KVM machine...
	I1014 13:56:19.317686   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing default KVM network
	I1014 13:56:19.317799   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing private KVM network mk-ha-450021
	I1014 13:56:19.317961   25306 main.go:141] libmachine: (ha-450021-m03) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.317988   25306 main.go:141] libmachine: (ha-450021-m03) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:56:19.318035   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.317950   26053 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.318138   25306 main.go:141] libmachine: (ha-450021-m03) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:56:19.552577   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.552461   26053 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa...
	I1014 13:56:19.731749   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731620   26053 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk...
	I1014 13:56:19.731783   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing magic tar header
	I1014 13:56:19.731797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing SSH key tar header
	I1014 13:56:19.731814   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731727   26053 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.731831   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03
	I1014 13:56:19.731859   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 (perms=drwx------)
	I1014 13:56:19.731873   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:56:19.731885   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:56:19.731899   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:56:19.731913   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.731942   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:56:19.731955   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:56:19.731964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:56:19.731973   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home
	I1014 13:56:19.731985   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:56:19.732001   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:56:19.732012   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:56:19.732026   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:19.732040   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Skipping /home - not owner
	I1014 13:56:19.732949   25306 main.go:141] libmachine: (ha-450021-m03) define libvirt domain using xml: 
	I1014 13:56:19.732973   25306 main.go:141] libmachine: (ha-450021-m03) <domain type='kvm'>
	I1014 13:56:19.732984   25306 main.go:141] libmachine: (ha-450021-m03)   <name>ha-450021-m03</name>
	I1014 13:56:19.732992   25306 main.go:141] libmachine: (ha-450021-m03)   <memory unit='MiB'>2200</memory>
	I1014 13:56:19.733004   25306 main.go:141] libmachine: (ha-450021-m03)   <vcpu>2</vcpu>
	I1014 13:56:19.733014   25306 main.go:141] libmachine: (ha-450021-m03)   <features>
	I1014 13:56:19.733021   25306 main.go:141] libmachine: (ha-450021-m03)     <acpi/>
	I1014 13:56:19.733031   25306 main.go:141] libmachine: (ha-450021-m03)     <apic/>
	I1014 13:56:19.733038   25306 main.go:141] libmachine: (ha-450021-m03)     <pae/>
	I1014 13:56:19.733044   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733056   25306 main.go:141] libmachine: (ha-450021-m03)   </features>
	I1014 13:56:19.733071   25306 main.go:141] libmachine: (ha-450021-m03)   <cpu mode='host-passthrough'>
	I1014 13:56:19.733081   25306 main.go:141] libmachine: (ha-450021-m03)   
	I1014 13:56:19.733089   25306 main.go:141] libmachine: (ha-450021-m03)   </cpu>
	I1014 13:56:19.733099   25306 main.go:141] libmachine: (ha-450021-m03)   <os>
	I1014 13:56:19.733106   25306 main.go:141] libmachine: (ha-450021-m03)     <type>hvm</type>
	I1014 13:56:19.733117   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='cdrom'/>
	I1014 13:56:19.733126   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='hd'/>
	I1014 13:56:19.733136   25306 main.go:141] libmachine: (ha-450021-m03)     <bootmenu enable='no'/>
	I1014 13:56:19.733151   25306 main.go:141] libmachine: (ha-450021-m03)   </os>
	I1014 13:56:19.733160   25306 main.go:141] libmachine: (ha-450021-m03)   <devices>
	I1014 13:56:19.733169   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='cdrom'>
	I1014 13:56:19.733183   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/boot2docker.iso'/>
	I1014 13:56:19.733196   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hdc' bus='scsi'/>
	I1014 13:56:19.733209   25306 main.go:141] libmachine: (ha-450021-m03)       <readonly/>
	I1014 13:56:19.733218   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733227   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='disk'>
	I1014 13:56:19.733239   25306 main.go:141] libmachine: (ha-450021-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:56:19.733252   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk'/>
	I1014 13:56:19.733266   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hda' bus='virtio'/>
	I1014 13:56:19.733278   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733286   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733298   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='mk-ha-450021'/>
	I1014 13:56:19.733306   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733315   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733325   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733356   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='default'/>
	I1014 13:56:19.733373   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733379   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733383   25306 main.go:141] libmachine: (ha-450021-m03)     <serial type='pty'>
	I1014 13:56:19.733387   25306 main.go:141] libmachine: (ha-450021-m03)       <target port='0'/>
	I1014 13:56:19.733394   25306 main.go:141] libmachine: (ha-450021-m03)     </serial>
	I1014 13:56:19.733399   25306 main.go:141] libmachine: (ha-450021-m03)     <console type='pty'>
	I1014 13:56:19.733403   25306 main.go:141] libmachine: (ha-450021-m03)       <target type='serial' port='0'/>
	I1014 13:56:19.733410   25306 main.go:141] libmachine: (ha-450021-m03)     </console>
	I1014 13:56:19.733415   25306 main.go:141] libmachine: (ha-450021-m03)     <rng model='virtio'>
	I1014 13:56:19.733430   25306 main.go:141] libmachine: (ha-450021-m03)       <backend model='random'>/dev/random</backend>
	I1014 13:56:19.733436   25306 main.go:141] libmachine: (ha-450021-m03)     </rng>
	I1014 13:56:19.733441   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733445   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733449   25306 main.go:141] libmachine: (ha-450021-m03)   </devices>
	I1014 13:56:19.733455   25306 main.go:141] libmachine: (ha-450021-m03) </domain>
	I1014 13:56:19.733462   25306 main.go:141] libmachine: (ha-450021-m03) 
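	The XML printed above is the domain definition the kvm2 driver hands to libvirt before booting the VM. A minimal, illustrative sketch of defining and starting such a domain with the libvirt Go bindings (the XML file path is a placeholder; this is not the driver's actual code):

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		xml, err := os.ReadFile("ha-450021-m03.xml") // a domain definition like the one logged above
		if err != nil {
			panic(err)
		}
		conn, err := libvirt.NewConnect("qemu:///system") // matches KVMQemuURI in the config dump
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil { // boots the defined domain
			panic(err)
		}
		name, _ := dom.GetName()
		fmt.Println("defined and started domain", name)
	}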
	I1014 13:56:19.740127   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:3e:d5:3c in network default
	I1014 13:56:19.740688   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring networks are active...
	I1014 13:56:19.740710   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:19.741382   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network default is active
	I1014 13:56:19.741753   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network mk-ha-450021 is active
	I1014 13:56:19.742099   25306 main.go:141] libmachine: (ha-450021-m03) Getting domain xml...
	I1014 13:56:19.742834   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:21.010084   25306 main.go:141] libmachine: (ha-450021-m03) Waiting to get IP...
	I1014 13:56:21.010944   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.011316   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.011377   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.011315   26053 retry.go:31] will retry after 306.133794ms: waiting for machine to come up
	I1014 13:56:21.318826   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.319333   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.319361   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.319280   26053 retry.go:31] will retry after 366.66223ms: waiting for machine to come up
	I1014 13:56:21.687816   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.688312   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.688353   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.688274   26053 retry.go:31] will retry after 390.93754ms: waiting for machine to come up
	I1014 13:56:22.080797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.081263   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.081290   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.081223   26053 retry.go:31] will retry after 398.805239ms: waiting for machine to come up
	I1014 13:56:22.481851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.482319   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.482343   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.482287   26053 retry.go:31] will retry after 640.042779ms: waiting for machine to come up
	I1014 13:56:23.123714   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:23.124086   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:23.124144   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:23.124073   26053 retry.go:31] will retry after 920.9874ms: waiting for machine to come up
	I1014 13:56:24.047070   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.047392   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.047414   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.047351   26053 retry.go:31] will retry after 897.422021ms: waiting for machine to come up
	I1014 13:56:24.946948   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.947347   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.947372   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.947310   26053 retry.go:31] will retry after 1.40276044s: waiting for machine to come up
	I1014 13:56:26.351855   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:26.352313   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:26.352340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:26.352279   26053 retry.go:31] will retry after 1.726907493s: waiting for machine to come up
	I1014 13:56:28.080396   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:28.080846   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:28.080875   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:28.080790   26053 retry.go:31] will retry after 1.482180268s: waiting for machine to come up
	I1014 13:56:29.564825   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:29.565318   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:29.565340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:29.565288   26053 retry.go:31] will retry after 2.541525756s: waiting for machine to come up
	I1014 13:56:32.109990   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:32.110440   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:32.110469   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:32.110395   26053 retry.go:31] will retry after 2.914830343s: waiting for machine to come up
	I1014 13:56:35.026789   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:35.027206   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:35.027240   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:35.027152   26053 retry.go:31] will retry after 3.572900713s: waiting for machine to come up
	I1014 13:56:38.603496   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:38.603914   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:38.603943   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:38.603867   26053 retry.go:31] will retry after 3.566960315s: waiting for machine to come up
	I1014 13:56:42.173796   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174271   25306 main.go:141] libmachine: (ha-450021-m03) Found IP for machine: 192.168.39.55
	I1014 13:56:42.174288   25306 main.go:141] libmachine: (ha-450021-m03) Reserving static IP address...
	I1014 13:56:42.174301   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has current primary IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174679   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "ha-450021-m03", mac: "52:54:00:af:04:2c", ip: "192.168.39.55"} in network mk-ha-450021
	I1014 13:56:42.249586   25306 main.go:141] libmachine: (ha-450021-m03) Reserved static IP address: 192.168.39.55
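	The "Waiting to get IP" retries above poll libvirt until the new domain's MAC address appears in the mk-ha-450021 network's DHCP leases. A sketch of that wait with the libvirt Go bindings; the fixed 2-second sleep stands in for the growing retry interval seen in the log:

	package main

	import (
		"fmt"
		"strings"
		"time"

		libvirt "libvirt.org/go/libvirt"
	)

	// waitForIP polls the network's DHCP leases until one matches the given MAC address.
	func waitForIP(conn *libvirt.Connect, network, mac string, timeout time.Duration) (string, error) {
		net, err := conn.LookupNetworkByName(network)
		if err != nil {
			return "", err
		}
		defer net.Free()
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			leases, err := net.GetDHCPLeases()
			if err == nil {
				for _, l := range leases {
					if strings.EqualFold(l.Mac, mac) && l.IPaddr != "" {
						return l.IPaddr, nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return "", fmt.Errorf("no DHCP lease for %s on %s within %s", mac, network, timeout)
	}

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()
		ip, err := waitForIP(conn, "mk-ha-450021", "52:54:00:af:04:2c", 10*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("machine IP:", ip)
	}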
	I1014 13:56:42.249623   25306 main.go:141] libmachine: (ha-450021-m03) Waiting for SSH to be available...
	I1014 13:56:42.249632   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:42.252725   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.253185   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021
	I1014 13:56:42.253208   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:af:04:2c
	I1014 13:56:42.253434   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:42.253458   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:42.253486   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:42.253504   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:42.253518   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:42.256978   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:56:42.256996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:56:42.257003   25306 main.go:141] libmachine: (ha-450021-m03) DBG | command : exit 0
	I1014 13:56:42.257008   25306 main.go:141] libmachine: (ha-450021-m03) DBG | err     : exit status 255
	I1014 13:56:42.257014   25306 main.go:141] libmachine: (ha-450021-m03) DBG | output  : 
	I1014 13:56:45.257522   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:45.260212   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260696   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.260726   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260786   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:45.260815   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:45.260836   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:45.260845   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:45.260853   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:45.382585   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: <nil>: 
	I1014 13:56:45.382879   25306 main.go:141] libmachine: (ha-450021-m03) KVM machine creation complete!
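	The WaitForSSH step above shells out to /usr/bin/ssh and retries "exit 0" until the guest answers. An equivalent probe written against golang.org/x/crypto/ssh, offered only as a sketch (user, key path and retry cadence are taken from the log but the code itself is illustrative):

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	// waitForSSH retries running "exit 0" over SSH until it succeeds or attempts run out.
	func waitForSSH(addr, user, keyPath string, attempts int) error {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the logged command
			Timeout:         10 * time.Second,
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				sess, serr := client.NewSession()
				if serr == nil {
					serr = sess.Run("exit 0")
					sess.Close()
				}
				client.Close()
				if serr == nil {
					return nil
				}
				lastErr = serr
			} else {
				lastErr = err
			}
			time.Sleep(3 * time.Second) // the log retries roughly every 3 seconds
		}
		return fmt.Errorf("ssh not ready at %s: %w", addr, lastErr)
	}

	func main() {
		if err := waitForSSH("192.168.39.55:22", "docker",
			"/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa", 20); err != nil {
			panic(err)
		}
		fmt.Println("ssh is available")
	}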
	I1014 13:56:45.383199   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:45.383711   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.383880   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.384004   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:56:45.384014   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetState
	I1014 13:56:45.385264   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:56:45.385276   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:56:45.385281   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:56:45.385287   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.387787   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388084   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.388108   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388291   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.388456   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388593   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.388830   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.389029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.389040   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:56:45.485735   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:56:45.485758   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:56:45.485768   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.488882   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489166   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.489189   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489303   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.489486   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489751   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.489875   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.490046   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.490060   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:56:45.587324   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:56:45.587394   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:56:45.587407   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:56:45.587422   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587668   25306 buildroot.go:166] provisioning hostname "ha-450021-m03"
	I1014 13:56:45.587694   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587891   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.589987   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590329   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.590355   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590484   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.590650   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590770   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590887   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.591045   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.591197   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.591208   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m03 && echo "ha-450021-m03" | sudo tee /etc/hostname
	I1014 13:56:45.708548   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m03
	
	I1014 13:56:45.708578   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.711602   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.711972   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.711996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.712173   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.712328   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712490   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.712744   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.712915   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.712938   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:56:45.819779   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:56:45.819813   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:56:45.819833   25306 buildroot.go:174] setting up certificates
	I1014 13:56:45.819844   25306 provision.go:84] configureAuth start
	I1014 13:56:45.819857   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.820154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:45.823118   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823460   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.823487   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823678   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.825593   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.825969   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.826000   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.826082   25306 provision.go:143] copyHostCerts
	I1014 13:56:45.826120   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826162   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:56:45.826174   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826256   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:56:45.826387   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826414   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:56:45.826422   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826470   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:56:45.826533   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826559   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:56:45.826567   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826616   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:56:45.826689   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m03 san=[127.0.0.1 192.168.39.55 ha-450021-m03 localhost minikube]
	I1014 13:56:45.954899   25306 provision.go:177] copyRemoteCerts
	I1014 13:56:45.954971   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:56:45.955000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.957506   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957791   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.957818   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957960   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.958125   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.958305   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.958436   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.036842   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:56:46.036916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:56:46.062450   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:56:46.062515   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:56:46.086853   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:56:46.086926   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:56:46.115352   25306 provision.go:87] duration metric: took 295.495227ms to configureAuth
	I1014 13:56:46.115379   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:56:46.115621   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:46.115716   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.118262   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118631   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.118656   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118842   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.119017   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119286   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.119431   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.119582   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.119596   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:56:46.343295   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:56:46.343323   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:56:46.343334   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetURL
	I1014 13:56:46.344763   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using libvirt version 6000000
	I1014 13:56:46.346964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347332   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.347354   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347553   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:56:46.347568   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:56:46.347575   25306 client.go:171] duration metric: took 27.031894224s to LocalClient.Create
	I1014 13:56:46.347595   25306 start.go:167] duration metric: took 27.031958272s to libmachine.API.Create "ha-450021"
	I1014 13:56:46.347605   25306 start.go:293] postStartSetup for "ha-450021-m03" (driver="kvm2")
	I1014 13:56:46.347614   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:56:46.347629   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.347825   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:56:46.347855   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.350344   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350734   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.350754   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350907   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.351098   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.351237   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.351388   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.433896   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:56:46.438009   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:56:46.438030   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:56:46.438090   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:56:46.438161   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:56:46.438171   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:56:46.438246   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:56:46.448052   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:46.472253   25306 start.go:296] duration metric: took 124.635752ms for postStartSetup
	I1014 13:56:46.472307   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:46.472896   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.475688   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476037   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.476063   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476352   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:46.476544   25306 start.go:128] duration metric: took 27.178917299s to createHost
	I1014 13:56:46.476567   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.478884   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479221   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.479251   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479355   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.479528   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479638   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479747   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.479874   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.480025   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.480035   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:56:46.583399   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914206.561472302
	
	I1014 13:56:46.583425   25306 fix.go:216] guest clock: 1728914206.561472302
	I1014 13:56:46.583435   25306 fix.go:229] Guest: 2024-10-14 13:56:46.561472302 +0000 UTC Remote: 2024-10-14 13:56:46.476556325 +0000 UTC m=+146.700269378 (delta=84.915977ms)
	I1014 13:56:46.583455   25306 fix.go:200] guest clock delta is within tolerance: 84.915977ms
	I1014 13:56:46.583460   25306 start.go:83] releasing machines lock for "ha-450021-m03", held for 27.285931106s
	I1014 13:56:46.583477   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.583714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.586281   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.586554   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.586578   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.589268   25306 out.go:177] * Found network options:
	I1014 13:56:46.590896   25306 out.go:177]   - NO_PROXY=192.168.39.176,192.168.39.89
	W1014 13:56:46.592325   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.592354   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.592374   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.592957   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593143   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593217   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:56:46.593262   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	W1014 13:56:46.593451   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.593472   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.593517   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:56:46.593532   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.596078   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596267   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596474   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596494   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596667   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.596762   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596784   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596836   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.596933   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.597000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597050   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.597134   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.597191   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597299   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.829516   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:56:46.836362   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:56:46.836435   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:56:46.855005   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:56:46.855034   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:56:46.855101   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:56:46.873805   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:56:46.888317   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:56:46.888368   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:56:46.902770   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:56:46.916283   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:56:47.031570   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:56:47.186900   25306 docker.go:233] disabling docker service ...
	I1014 13:56:47.186971   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:56:47.202040   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:56:47.215421   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:56:47.352807   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:56:47.479560   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:56:47.493558   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:56:47.511643   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:56:47.511704   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.521941   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:56:47.522055   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.534488   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.545529   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.555346   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:56:47.565221   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.574851   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.591247   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.601017   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:56:47.610150   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:56:47.610208   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:56:47.623643   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:56:47.632860   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:47.769053   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:56:47.859548   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:56:47.859617   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:56:47.864769   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:56:47.864838   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:56:47.868622   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:56:47.912151   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:56:47.912224   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.943678   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.974464   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:56:47.975982   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:56:47.977421   25306 out.go:177]   - env NO_PROXY=192.168.39.176,192.168.39.89
	I1014 13:56:47.978761   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:47.981382   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.981851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:47.981880   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.982078   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:56:47.986330   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:56:47.999765   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:56:47.999983   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:48.000276   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.000314   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.015013   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I1014 13:56:48.015440   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.015880   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.015898   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.016248   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.016426   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:56:48.017904   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:48.018185   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.018221   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.032080   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I1014 13:56:48.032532   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.033010   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.033034   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.033376   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.033566   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:48.033738   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.55
	I1014 13:56:48.033750   25306 certs.go:194] generating shared ca certs ...
	I1014 13:56:48.033771   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.033910   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:56:48.033951   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:56:48.033962   25306 certs.go:256] generating profile certs ...
	I1014 13:56:48.034054   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:56:48.034099   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2
	I1014 13:56:48.034119   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.55 192.168.39.254]
	I1014 13:56:48.250009   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 ...
	I1014 13:56:48.250065   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2: {Name:mk915feb36aa4db7e40387e7070135b42d923437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250246   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 ...
	I1014 13:56:48.250260   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2: {Name:mk5df80a68a940fb5e6799020daa8453d1ca5d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250346   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:56:48.250480   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:56:48.250647   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:56:48.250665   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:56:48.250682   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:56:48.250698   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:56:48.250714   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:56:48.250729   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:56:48.250744   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:56:48.250759   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:56:48.282713   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:56:48.282807   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:56:48.282843   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:56:48.282853   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:56:48.282876   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:56:48.282899   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:56:48.282919   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:56:48.282958   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:48.282987   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.283001   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.283013   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.283046   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:48.285808   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286249   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:48.286279   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286442   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:48.286648   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:48.286791   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:48.286909   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:48.366887   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:56:48.372822   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:56:48.386233   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:56:48.391254   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:56:48.402846   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:56:48.407460   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:56:48.418138   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:56:48.423366   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:56:48.435286   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:56:48.442980   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:56:48.457010   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:56:48.462031   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:56:48.475327   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:56:48.499553   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:56:48.526670   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:56:48.552105   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:56:48.577419   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1014 13:56:48.600650   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:56:48.623847   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:56:48.649170   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:56:48.674110   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:56:48.700598   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:56:48.725176   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:56:48.750067   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:56:48.767549   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:56:48.786866   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:56:48.804737   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:56:48.822022   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:56:48.840501   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:56:48.858556   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:56:48.875294   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:56:48.880974   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:56:48.892080   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896904   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896954   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.902856   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:56:48.914212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:56:48.926784   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931725   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931780   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.937633   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:56:48.949727   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:56:48.960604   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965337   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965398   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.970965   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:56:48.983521   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:56:48.987988   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:56:48.988067   25306 kubeadm.go:934] updating node {m03 192.168.39.55 8443 v1.31.1 crio true true} ...
	I1014 13:56:48.988197   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:56:48.988224   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:56:48.988260   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:56:49.006786   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:56:49.006878   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 13:56:49.006948   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.017177   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:56:49.017231   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1014 13:56:49.027571   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027572   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:56:49.027592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027633   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1014 13:56:49.027650   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027677   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:49.041850   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:56:49.041880   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:56:49.059453   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:56:49.059469   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.059486   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:56:49.059574   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.108836   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:56:49.108879   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1014 13:56:49.922146   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:56:49.934057   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:56:49.951495   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:56:49.969831   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:56:49.987375   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:56:49.991392   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:56:50.004437   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:50.138457   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:56:50.156141   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:50.156664   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:50.156719   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:50.172505   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1014 13:56:50.172984   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:50.173395   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:50.173421   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:50.173801   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:50.173992   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:50.174119   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:56:50.174253   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:56:50.174270   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:50.177090   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177620   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:50.177652   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177788   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:50.177965   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:50.178111   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:50.178264   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:50.344835   25306 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:50.344884   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443"
	I1014 13:57:13.924825   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443": (23.579918283s)
	I1014 13:57:13.924874   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:57:14.548857   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m03 minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:57:14.695478   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:57:14.877781   25306 start.go:319] duration metric: took 24.703657095s to joinCluster
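Note: the lines above show the control-plane join flow for m03 as captured in the log: a join command with a non-expiring token is generated on the primary node (kubeadm token create --print-join-command --ttl=0), run on the new machine with control-plane flags appended, and the node is then labeled and has its control-plane NoSchedule taint removed. The Go sketch below only illustrates that two-step pattern; it is not minikube's implementation, the SSH plumbing is omitted, and the flag values are copied from this log purely as an example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Step 1 (on an existing control plane): print a join command with a
	// non-expiring token, mirroring the command visible in the log above.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		panic(err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2 (on the joining node): append the control-plane specific flags.
	// The address/port values here simply echo the ones from this run.
	joinCmd += " --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443"
	fmt.Println("would run on m03:", joinCmd)
}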
	I1014 13:57:14.877880   25306 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:57:14.878165   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:57:14.879747   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:57:14.881030   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:57:15.185770   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:57:15.218461   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:57:15.218911   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:57:15.218986   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
	I1014 13:57:15.219237   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m03" to be "Ready" ...
	I1014 13:57:15.219350   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.219360   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.219373   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.219378   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.231145   25306 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 13:57:15.719481   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.719504   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.719515   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.719523   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.723133   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.219449   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.219474   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.219486   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.219493   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.222753   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.719775   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.719794   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.719801   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.719805   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.723148   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.220337   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.220382   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.223796   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.224523   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:17.719785   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.719812   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.719823   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.719828   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.724599   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:18.219479   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.219497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.219505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.219510   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.222903   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:18.719939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.719958   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.719964   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.722786   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:19.220210   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.220235   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.220246   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.220251   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.223890   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:19.719936   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.719957   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.719965   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.725873   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:19.726613   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:20.219399   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.219418   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.219426   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.219429   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.222447   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:20.720283   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.720304   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.720311   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.720316   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.723293   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:21.219622   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.219643   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.219651   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.219655   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.223137   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:21.719413   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.719434   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.719441   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.719445   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.727130   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:21.728875   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:22.219563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.219584   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.219593   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.219597   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.222980   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:22.719873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.719897   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.719906   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.719910   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.723538   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.219424   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.219447   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.219456   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.219459   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.223288   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.719840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.719863   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.719870   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.719874   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.725306   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:24.220401   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.220427   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.220439   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.220448   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.224025   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:24.224423   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:24.720285   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.720311   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.720323   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.720331   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.724123   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.219820   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.219841   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.219849   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.219852   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.223237   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.720061   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.720081   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.720090   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.720095   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.727909   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:26.220029   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.220052   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.220060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.220065   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.223671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:26.719549   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.719569   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.719577   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.719581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.724063   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:26.724628   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:27.220196   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.220218   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.220230   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.227906   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:27.719535   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.719576   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.719587   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.719592   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.727292   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:28.219952   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.219973   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.219983   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.219988   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.223688   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:28.719432   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.719455   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.719463   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.719468   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.722896   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.219877   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.219901   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.219911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.219915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.223129   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.223965   25306 node_ready.go:49] node "ha-450021-m03" has status "Ready":"True"
	I1014 13:57:29.223987   25306 node_ready.go:38] duration metric: took 14.004731761s for node "ha-450021-m03" to be "Ready" ...
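Note: the repeated GETs of /api/v1/nodes/ha-450021-m03 above are the readiness poll behind node_ready.go, issued roughly every 500ms until the node reports Ready. A minimal client-go sketch of the same check follows; the kubeconfig path is a placeholder, not a value taken from this run.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; swap in a real one to run this.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for {
		// Fetch the node object and look for the Ready condition being True,
		// which is what the log's "Ready":"True" line corresponds to.
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-450021-m03", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the polling interval seen above
	}
}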
	I1014 13:57:29.223998   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:29.224060   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:29.224068   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.224075   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.224081   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.230054   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:29.238333   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.238422   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:57:29.238435   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.238446   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.238455   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.242284   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.243174   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.243194   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.243204   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.243210   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.245933   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.246411   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.246431   25306 pod_ready.go:82] duration metric: took 8.073653ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246440   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246494   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:57:29.246505   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.246515   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.246521   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.248883   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.249550   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.249563   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.249569   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.249573   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.251738   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.252240   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.252260   25306 pod_ready.go:82] duration metric: took 5.813932ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252268   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252312   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:57:29.252319   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.252326   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.252330   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.254629   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.255222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.255236   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.255243   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.255248   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.257432   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.257842   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.257858   25306 pod_ready.go:82] duration metric: took 5.5841ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257865   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257906   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:57:29.257913   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.257920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.257926   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.260016   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.260730   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:29.260748   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.260759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.260766   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.262822   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.263416   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.263434   25306 pod_ready.go:82] duration metric: took 5.562613ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.263445   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.420814   25306 request.go:632] Waited for 157.302029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420888   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420896   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.420904   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.420911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.423933   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
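Note: the "Waited ... due to client-side throttling, not priority and fairness" messages above come from client-go's own rate limiter rather than from the API server; the burst of per-pod and per-node GETs simply exceeds the client's default budget. The sketch below shows where those knobs live on rest.Config; the QPS/Burst values are arbitrary examples, and the low defaults mentioned in the comment are an assumption about client-go's usual behaviour, not something read from this log.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go falls back to a low default
	// (commonly around QPS 5 / Burst 10), which is what produces the
	// "Waited ... due to client-side throttling" delays in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}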
	I1014 13:57:29.620244   25306 request.go:632] Waited for 195.721406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620303   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620309   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.620331   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.620359   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.623721   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.624232   25306 pod_ready.go:93] pod "etcd-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.624248   25306 pod_ready.go:82] duration metric: took 360.793531ms for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.624265   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.820803   25306 request.go:632] Waited for 196.4673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820871   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820878   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.820888   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.820899   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.825055   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:30.020658   25306 request.go:632] Waited for 194.868544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020733   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.020740   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.020744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.024136   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.024766   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.024782   25306 pod_ready.go:82] duration metric: took 400.510119ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.024791   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.220429   25306 request.go:632] Waited for 195.542568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220491   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.220508   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.220517   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.224059   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.420172   25306 request.go:632] Waited for 195.340177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420225   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420231   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.420238   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.420243   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.423967   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.424613   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.424631   25306 pod_ready.go:82] duration metric: took 399.833776ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.424640   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.620846   25306 request.go:632] Waited for 196.141352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620922   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620928   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.620935   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.620942   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.624496   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.820849   25306 request.go:632] Waited for 195.396807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820975   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.820988   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.820995   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.824502   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.825021   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.825046   25306 pod_ready.go:82] duration metric: took 400.398723ms for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.825059   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.020285   25306 request.go:632] Waited for 195.157008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020365   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.020385   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.020393   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.024268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.220585   25306 request.go:632] Waited for 195.341359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220643   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220650   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.220659   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.220664   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.224268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.224942   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.224972   25306 pod_ready.go:82] duration metric: took 399.90441ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.224991   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.419861   25306 request.go:632] Waited for 194.791136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419920   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419926   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.419934   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.419939   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.423671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.620170   25306 request.go:632] Waited for 195.363598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620257   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620267   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.620279   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.620289   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.623838   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.624806   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.624830   25306 pod_ready.go:82] duration metric: took 399.825307ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.624845   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.819925   25306 request.go:632] Waited for 194.986166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819986   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819995   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.820007   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.820020   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.823660   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.020870   25306 request.go:632] Waited for 196.217554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020953   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020964   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.020976   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.020984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.024484   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.025120   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.025154   25306 pod_ready.go:82] duration metric: took 400.297134ms for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.025174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.220154   25306 request.go:632] Waited for 194.89867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220229   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.220246   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.223571   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.420701   25306 request.go:632] Waited for 196.352524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420758   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420763   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.420770   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.420774   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.424213   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.424900   25306 pod_ready.go:93] pod "kube-proxy-9tbfp" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.424923   25306 pod_ready.go:82] duration metric: took 399.74019ms for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.424936   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.619849   25306 request.go:632] Waited for 194.848954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619902   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.619915   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.619918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.623593   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.820780   25306 request.go:632] Waited for 196.366155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820854   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.820863   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.820870   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.824510   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.825180   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.825196   25306 pod_ready.go:82] duration metric: took 400.2529ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.825205   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.020309   25306 request.go:632] Waited for 195.030338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020398   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020409   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.020421   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.020429   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.023944   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.220873   25306 request.go:632] Waited for 196.168894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220972   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220984   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.221002   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.221010   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.224398   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.225139   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.225161   25306 pod_ready.go:82] duration metric: took 399.9482ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.225174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.420278   25306 request.go:632] Waited for 195.028059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420352   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420358   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.420365   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.420370   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.423970   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.619940   25306 request.go:632] Waited for 195.292135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620017   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620024   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.620031   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.620038   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.623628   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.624429   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.624446   25306 pod_ready.go:82] duration metric: took 399.265054ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.624456   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.820766   25306 request.go:632] Waited for 196.250065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820834   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820840   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.820847   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.820861   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.824813   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.020844   25306 request.go:632] Waited for 195.391993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020901   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.020915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.020920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.025139   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.026105   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.026127   25306 pod_ready.go:82] duration metric: took 401.663759ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.026140   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.220315   25306 request.go:632] Waited for 194.095801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220368   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.220381   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.224012   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.420204   25306 request.go:632] Waited for 195.373756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420275   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420280   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.420288   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.420292   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.424022   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.424779   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.424801   25306 pod_ready.go:82] duration metric: took 398.654013ms for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.424816   25306 pod_ready.go:39] duration metric: took 5.200801864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:34.424833   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:57:34.424888   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:57:34.443450   25306 api_server.go:72] duration metric: took 19.56551851s to wait for apiserver process to appear ...
	I1014 13:57:34.443480   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:57:34.443507   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:57:34.447984   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
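Note: the healthz probe above is a raw GET against the apiserver's /healthz endpoint, which returns the literal body "ok" when the server is healthy. With client-go the same probe can be issued through the discovery REST client, as in this sketch (the kubeconfig path is a placeholder):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Hit the unversioned /healthz path directly, mirroring the check in the log.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body)) // prints "ok" when the apiserver is healthy
}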
	I1014 13:57:34.448076   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:57:34.448089   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.448100   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.448108   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.449007   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:57:34.449084   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:57:34.449104   25306 api_server.go:131] duration metric: took 5.616812ms to wait for apiserver health ...
	I1014 13:57:34.449115   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:57:34.620303   25306 request.go:632] Waited for 171.103547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620363   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.620380   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.620385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.626531   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:34.632849   25306 system_pods.go:59] 24 kube-system pods found
	I1014 13:57:34.632878   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:34.632883   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:34.632887   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:34.632891   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:34.632894   25306 system_pods.go:61] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:34.632897   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:34.632900   25306 system_pods.go:61] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:34.632903   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:34.632906   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:34.632909   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:34.632911   25306 system_pods.go:61] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:34.632915   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:34.632917   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:34.632920   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:34.632923   25306 system_pods.go:61] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:34.632926   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:34.632929   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:34.632931   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:34.632934   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:34.632937   25306 system_pods.go:61] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:34.632940   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:34.632942   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:34.632946   25306 system_pods.go:61] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:34.632948   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:34.632953   25306 system_pods.go:74] duration metric: took 183.830824ms to wait for pod list to return data ...
	I1014 13:57:34.632963   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:57:34.820472   25306 request.go:632] Waited for 187.441614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820540   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820546   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.820553   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.820563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.824880   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.824982   25306 default_sa.go:45] found service account: "default"
	I1014 13:57:34.824994   25306 default_sa.go:55] duration metric: took 192.026288ms for default service account to be created ...
	I1014 13:57:34.825002   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:57:35.020105   25306 request.go:632] Waited for 195.031126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020178   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020187   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.020199   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.020209   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.026365   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:35.032685   25306 system_pods.go:86] 24 kube-system pods found
	I1014 13:57:35.032713   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:35.032719   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:35.032722   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:35.032727   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:35.032731   25306 system_pods.go:89] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:35.032736   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:35.032739   25306 system_pods.go:89] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:35.032743   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:35.032747   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:35.032751   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:35.032754   25306 system_pods.go:89] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:35.032758   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:35.032763   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:35.032770   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:35.032774   25306 system_pods.go:89] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:35.032780   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:35.032783   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:35.032789   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:35.032793   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:35.032799   25306 system_pods.go:89] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:35.032803   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:35.032808   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:35.032811   25306 system_pods.go:89] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:35.032816   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:35.032822   25306 system_pods.go:126] duration metric: took 207.815391ms to wait for k8s-apps to be running ...
	I1014 13:57:35.032831   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:57:35.032872   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:57:35.048661   25306 system_svc.go:56] duration metric: took 15.819815ms WaitForService to wait for kubelet
	I1014 13:57:35.048694   25306 kubeadm.go:582] duration metric: took 20.170783435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:57:35.048713   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:57:35.220270   25306 request.go:632] Waited for 171.481631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220338   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220343   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.220351   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.220356   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.224271   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:35.225220   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225243   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225255   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225258   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225264   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225268   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225272   25306 node_conditions.go:105] duration metric: took 176.55497ms to run NodePressure ...
	I1014 13:57:35.225286   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:57:35.225306   25306 start.go:255] writing updated cluster config ...
	I1014 13:57:35.225629   25306 ssh_runner.go:195] Run: rm -f paused
	I1014 13:57:35.278941   25306 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:57:35.281235   25306 out.go:177] * Done! kubectl is now configured to use "ha-450021" cluster and "default" namespace by default
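	The healthz probe recorded above (GET https://192.168.39.176:8443/healthz returning 200 "ok") can be reproduced outside the test harness. Below is a minimal, hypothetical Go sketch, not minikube's own api_server.go code: it reuses the apiserver address captured in the log, assumes anonymous access to /healthz is permitted (as the logged 200 response suggests), and skips TLS verification because the test VM serves a self-signed certificate.

	// healthz_probe.go - illustrative sketch only; endpoint taken from the log above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Short timeout and InsecureSkipVerify mirror a quick manual check
		// against a throwaway test cluster; do not use this TLS config in production.
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		resp, err := client.Get("https://192.168.39.176:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
	}

	Run against the same VM, this would print the "returned 200: ok" result that the log records before minikube moves on to listing kube-system pods.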
	
	
	==> CRI-O <==
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.433495652Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479433476279,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=292e95fe-b0fc-46d0-9e6f-a62ffd4db9a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.434033919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3a1b9f56-f60e-48e4-82da-dff90dde9376 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.434103760Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3a1b9f56-f60e-48e4-82da-dff90dde9376 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.434353020Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3a1b9f56-f60e-48e4-82da-dff90dde9376 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.480514137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d77d36c-50a4-4126-8976-261ff92b92b0 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.480661251Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d77d36c-50a4-4126-8976-261ff92b92b0 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.482877298Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3523c6dc-0922-4df7-ad7d-680e5e0c7727 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.483887082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479483857754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3523c6dc-0922-4df7-ad7d-680e5e0c7727 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.484781910Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6c74b6c-0240-45f6-a6ac-37c8fa5d193d name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.484844284Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6c74b6c-0240-45f6-a6ac-37c8fa5d193d name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.485067499Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6c74b6c-0240-45f6-a6ac-37c8fa5d193d name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.527736981Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ef444e70-30e7-4931-8510-40f988bc505e name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.527812655Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ef444e70-30e7-4931-8510-40f988bc505e name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.529226227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ea49827a-1d7f-499f-9bae-3495b3f7f8be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.529901738Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479529875766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ea49827a-1d7f-499f-9bae-3495b3f7f8be name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.530628620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=22623d89-94e9-4197-9b0a-68263bcf8d31 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.530685063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=22623d89-94e9-4197-9b0a-68263bcf8d31 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.530911576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=22623d89-94e9-4197-9b0a-68263bcf8d31 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.577316024Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3b7a0a90-dfe5-4ce4-8d11-7210aeb17792 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.577405444Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3b7a0a90-dfe5-4ce4-8d11-7210aeb17792 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.578357644Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=06c95fcd-9c99-4294-8f76-59f4030b8e1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.578964787Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479578936530,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=06c95fcd-9c99-4294-8f76-59f4030b8e1d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.579466854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=021c0f5b-1b03-467d-b1ff-6f936219f6ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.579538725Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=021c0f5b-1b03-467d-b1ff-6f936219f6ba name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:19 ha-450021 crio[655]: time="2024-10-14 14:01:19.579853001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=021c0f5b-1b03-467d-b1ff-6f936219f6ba name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a41053c31fcb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c3201918bd10d       busybox-7dff88458-fkz82
	1051cfacf1c9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   49d4b2387dd65       storage-provisioner
	138a0b23a0907       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   e862ae5ec13c3       coredns-7c65d6cfc9-h5s6h
	b17b6d38f9359       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b83407d74496b       coredns-7c65d6cfc9-btfml
	b15af89d835ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   10ad22ab64de3       kindnet-c2xkn
	5eec863af38c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   40a3318e89ae5       kube-proxy-dmbpv
	69f6cdf690df6       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   dcc284c053db6       kube-vip-ha-450021
	09fbfff3b334b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ee3335073bb66       kube-controller-manager-ha-450021
	4efae268f9ec3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   ce558cb07ca8f       kube-scheduler-ha-450021
	6ebec97dfd405       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bc7fe679de4dc       etcd-ha-450021
	942c179e591a9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   efaae5865d8af       kube-apiserver-ha-450021
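
The container inventory above corresponds to CRI-O's ListContainers RPC (the /runtime.v1.RuntimeService/ListContainers call captured in the log further up). As a minimal sketch only, not taken from this report: assuming CRI-O is listening on its default socket at /var/run/crio/crio.sock and the k8s.io/cri-api Go client is available, the same listing could be pulled programmatically like this:

// listcontainers.go: minimal sketch that asks a CRI runtime (here CRI-O) for its
// container list, the same RPC shown in the captured log above.
// Assumptions (not from this report): CRI-O on its default unix socket and the
// k8s.io/cri-api v1 client available in go.mod.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// grpc-go accepts unix:// targets, so no custom dialer is needed here.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial CRI socket: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Same call as /runtime.v1.RuntimeService/ListContainers, with no filter.
	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatalf("ListContainers: %v", err)
	}

	for _, c := range resp.Containers {
		id := c.Id
		if len(id) > 13 {
			id = id[:13] // match the truncated IDs shown in the table above
		}
		fmt.Printf("%s\t%s\t%s\t%s\n", id, c.Metadata.Name, c.State.String(),
			c.Labels["io.kubernetes.pod.name"])
	}
}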
	
	
	==> coredns [138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe] <==
	[INFO] 10.244.1.2:43382 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000121511s
	[INFO] 10.244.1.2:47675 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001762532s
	[INFO] 10.244.0.4:45515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083904s
	[INFO] 10.244.0.4:48451 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149827s
	[INFO] 10.244.0.4:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015272s
	[INFO] 10.244.2.2:40959 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194596s
	[INFO] 10.244.2.2:44151 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212714s
	[INFO] 10.244.2.2:55911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089682s
	[INFO] 10.244.1.2:47272 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299918s
	[INFO] 10.244.1.2:44591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078031s
	[INFO] 10.244.1.2:37471 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072637s
	[INFO] 10.244.0.4:52930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152779s
	[INFO] 10.244.0.4:33266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005592s
	[INFO] 10.244.2.2:36389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000275257s
	[INFO] 10.244.2.2:43232 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010928s
	[INFO] 10.244.2.2:38102 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092762s
	[INFO] 10.244.1.2:55403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222145s
	[INFO] 10.244.1.2:52540 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102916s
	[INFO] 10.244.0.4:54154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135993s
	[INFO] 10.244.0.4:36974 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196993s
	[INFO] 10.244.0.4:54725 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084888s
	[INFO] 10.244.2.2:57068 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174437s
	[INFO] 10.244.1.2:46234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191287s
	[INFO] 10.244.1.2:39695 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080939s
	[INFO] 10.244.1.2:36634 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064427s
	
	
	==> coredns [b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927] <==
	[INFO] 10.244.0.4:50854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009051191s
	[INFO] 10.244.0.4:34637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156712s
	[INFO] 10.244.0.4:33648 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081153s
	[INFO] 10.244.0.4:57465 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003251096s
	[INFO] 10.244.0.4:51433 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118067s
	[INFO] 10.244.2.2:37621 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200056s
	[INFO] 10.244.2.2:41751 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001978554s
	[INFO] 10.244.2.2:33044 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001486731s
	[INFO] 10.244.2.2:43102 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010457s
	[INFO] 10.244.2.2:36141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183057s
	[INFO] 10.244.1.2:35260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.1.2:40737 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00207375s
	[INFO] 10.244.1.2:34377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109225s
	[INFO] 10.244.1.2:48194 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096468s
	[INFO] 10.244.1.2:53649 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092891s
	[INFO] 10.244.0.4:39691 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126403s
	[INFO] 10.244.0.4:59011 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094158s
	[INFO] 10.244.2.2:46754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133215s
	[INFO] 10.244.1.2:44424 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161779s
	[INFO] 10.244.1.2:36322 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:56787 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305054s
	[INFO] 10.244.2.2:56511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168323s
	[INFO] 10.244.2.2:35510 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000291052s
	[INFO] 10.244.2.2:56208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174753s
	[INFO] 10.244.1.2:41964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119677s
	
	
	==> describe nodes <==
	Name:               ha-450021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:54:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-450021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0546a3427732401daacd4235ad46d465
	  System UUID:                0546a342-7732-401d-aacd-4235ad46d465
	  Boot ID:                    19dd080e-b9f2-467d-b5f2-41dbb07e1880
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fkz82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-7c65d6cfc9-btfml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 coredns-7c65d6cfc9-h5s6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m15s
	  kube-system                 etcd-ha-450021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m20s
	  kube-system                 kindnet-c2xkn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-450021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-450021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-proxy-dmbpv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-450021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-450021                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m14s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m27s (x7 over 6m27s)  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m27s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m27s (x8 over 6m27s)  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s (x8 over 6m27s)  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m20s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m20s                  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s                  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s                  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m16s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  NodeReady                6m1s                   kubelet          Node ha-450021 status is now: NodeReady
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	
	
	Name:               ha-450021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:55:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:58:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-450021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a42e43dc14cb4b949c605bff9ac6e0d6
	  System UUID:                a42e43dc-14cb-4b94-9c60-5bff9ac6e0d6
	  Boot ID:                    479e9a18-0fa8-4366-8acf-af40a06156d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nt6q5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-450021-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m22s
	  kube-system                 kindnet-2ghzc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m23s
	  kube-system                 kube-apiserver-ha-450021-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m22s
	  kube-system                 kube-controller-manager-ha-450021-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-v24tf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-scheduler-ha-450021-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-vip-ha-450021-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m24s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m23s (x8 over 5m24s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m23s (x8 over 5m24s)  kubelet          Node ha-450021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m23s (x7 over 5m24s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           5m14s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           3m59s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  NodeNotReady             109s                   node-controller  Node ha-450021-m02 status is now: NodeNotReady
	
	
	Name:               ha-450021-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:57:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-450021-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50171e2610d047279285af0bf8eead91
	  System UUID:                50171e26-10d0-4727-9285-af0bf8eead91
	  Boot ID:                    7b6afcf4-f39b-41c1-92d6-cc1e18f2f3ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lrvnn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-ha-450021-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-7jwgx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-450021-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-450021-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-proxy-9tbfp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-450021-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 kube-vip-ha-450021-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m3s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-450021-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m6s                 node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           3m59s                node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	
	
	Name:               ha-450021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_58_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-450021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8da54fea409461c84c103e8552a3553
	  System UUID:                c8da54fe-a409-461c-84c1-03e8552a3553
	  Boot ID:                    ed9b9ad9-a71a-4814-ae07-6cc1c2775deb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-478bj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m6s
	  kube-system                 kube-proxy-2mfnd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m6s (x2 over 3m7s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m6s (x2 over 3m7s)  kubelet          Node ha-450021-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m6s (x2 over 3m7s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m6s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           3m5s                 node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           3m2s                 node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  NodeReady                2m48s                kubelet          Node ha-450021-m04 status is now: NodeReady
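
The node descriptions above are standard kubectl describe-node output for the ha-450021 cluster; ha-450021-m02 has stopped posting status (Ready Unknown, NodeNotReady event) while the other three nodes report Ready. As an illustrative sketch only, with the kubeconfig path assumed rather than taken from this report, the same Ready conditions can be read with client-go:

// nodeconditions.go: minimal sketch that prints each node's Ready condition,
// a compact view of what the "describe nodes" section above reports in full.
// Assumption (not from this report): a kubeconfig at $HOME/.kube/config that
// points at the cluster.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatalf("load kubeconfig: %v", err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("build clientset: %v", err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatalf("list nodes: %v", err)
	}

	// For each node, print only the Ready condition, e.g.
	//   ha-450021-m02   Ready=Unknown (NodeStatusUnknown)
	for _, n := range nodes.Items {
		for _, cond := range n.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("%-16s Ready=%s (%s)\n", n.Name, cond.Status, cond.Reason)
			}
		}
	}
}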
	
	
	==> dmesg <==
	[Oct14 13:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050735] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040529] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.861908] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.617931] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.603277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.339591] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056090] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067047] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182956] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.129853] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268814] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.909642] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.099441] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.067805] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.555395] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.098328] kauditd_printk_skb: 79 callbacks suppressed
	[Oct14 13:55] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.850947] kauditd_printk_skb: 41 callbacks suppressed
	[Oct14 13:56] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1] <==
	{"level":"warn","ts":"2024-10-14T14:01:19.882923Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.886436Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.895028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.901089Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.909100Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.912552Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.915775Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.918713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.924447Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.931330Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.937268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.941198Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.945206Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.952992Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.960441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.964165Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.975493Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.977398Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.985698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.990856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:19.995815Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:20.001732Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:20.011151Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:20.018302Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:20.060268Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:01:20 up 6 min,  0 users,  load average: 0.22, 0.20, 0.10
	Linux ha-450021 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996] <==
	I1014 14:00:48.802211       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:00:58.792229       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:00:58.792335       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:00:58.792702       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:00:58.792738       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:00:58.792927       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:00:58.793022       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:00:58.793206       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:00:58.793233       1 main.go:300] handling current node
	I1014 14:01:08.792774       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:08.792894       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:01:08.793209       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:08.793270       1 main.go:300] handling current node
	I1014 14:01:08.793308       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:08.793385       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:08.793725       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:08.793788       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:01:18.792871       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:18.792903       1 main.go:300] handling current node
	I1014 14:01:18.792918       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:18.792922       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:18.793175       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:18.793264       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:01:18.793419       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:18.793492       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e] <==
	I1014 13:54:59.598140       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 13:54:59.663013       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 13:54:59.717856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 13:55:03.816892       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 13:55:04.117644       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 13:55:56.847231       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.847740       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.384µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1014 13:55:56.849144       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.850518       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.851864       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.726003ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1014 13:57:40.356093       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42006: use of closed network connection
	E1014 13:57:40.548948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42022: use of closed network connection
	E1014 13:57:40.734061       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42040: use of closed network connection
	E1014 13:57:40.931904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42056: use of closed network connection
	E1014 13:57:41.132089       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42064: use of closed network connection
	E1014 13:57:41.311104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42080: use of closed network connection
	E1014 13:57:41.483753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42086: use of closed network connection
	E1014 13:57:41.673306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42104: use of closed network connection
	E1014 13:57:41.861924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41084: use of closed network connection
	E1014 13:57:42.155414       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41118: use of closed network connection
	E1014 13:57:42.326032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41138: use of closed network connection
	E1014 13:57:42.498111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41150: use of closed network connection
	E1014 13:57:42.666091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41168: use of closed network connection
	E1014 13:57:42.837965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41180: use of closed network connection
	E1014 13:57:43.032348       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41204: use of closed network connection
	
	
	==> kube-controller-manager [09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a] <==
	I1014 13:58:14.814158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:14.814232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	E1014 13:58:14.983101       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"131c0255-c34c-4638-a6ae-c00d282c1fc8\", ResourceVersion:\"944\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 14, 13, 55, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\"
,\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000d57240), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"
\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b248), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeC
laimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b260), EmptyDir:(*v1.EmptyDirVolumeSource)(
nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxV
olumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b278), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azu
reFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc000d57280)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSo
urce)(0xc000d57300)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false
, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001b502a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralConta
iner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001820428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d51480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ov
erhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e15e60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001820470)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1014 13:58:14.983373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.178688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.243657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.340286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.399942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263850       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-450021-m04"
	I1014 13:58:18.322338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:24.991672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:58:32.779681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:33.281205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:45.471689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:59:30.147306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:59:30.148143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.170693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.349046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.558914ms"
	I1014 13:59:30.349473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="165.118µs"
	I1014 13:59:33.404625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:35.409214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	
	
	==> kube-proxy [5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:55:05.027976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:55:05.042612       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	E1014 13:55:05.042701       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:55:05.077520       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:55:05.077626       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:55:05.077653       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:55:05.080947       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:55:05.081416       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:55:05.081449       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:55:05.084048       1 config.go:199] "Starting service config controller"
	I1014 13:55:05.084244       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:55:05.084407       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:55:05.084429       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:55:05.085497       1 config.go:328] "Starting node config controller"
	I1014 13:55:05.085525       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:55:05.185149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 13:55:05.185195       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:55:05.185638       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221] <==
	W1014 13:54:57.431755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.431801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.619315       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.619367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.631913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 13:54:57.632033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.666200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.666268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.675854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.675918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.682854       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:54:57.683283       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:54:57.820025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:54:57.820087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:55:00.246826       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1014 13:57:36.278433       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.278688       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c(default/busybox-7dff88458-fkz82) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fkz82"
	E1014 13:57:36.278737       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" pod="default/busybox-7dff88458-fkz82"
	I1014 13:57:36.278788       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.279144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:57:36.279201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c0e6c9da-2bbd-4814-9310-ab74d5a3e09d(default/busybox-7dff88458-lrvnn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lrvnn"
	E1014 13:57:36.279240       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" pod="default/busybox-7dff88458-lrvnn"
	I1014 13:57:36.279273       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:58:14.867309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2mfnd" node="ha-450021-m04"
	E1014 13:58:14.867404       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" pod="kube-system/kube-proxy-2mfnd"
	
	
	==> kubelet <==
	Oct 14 13:59:59 ha-450021 kubelet[1299]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 13:59:59 ha-450021 kubelet[1299]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850190    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850218    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852474    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852527    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856761    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856806    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858206    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858470    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861764    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861870    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864513    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864550    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.724357    1299 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866616    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866661    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869535    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869642    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:19 ha-450021 kubelet[1299]: E1014 14:01:19.870997    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479870763162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:19 ha-450021 kubelet[1299]: E1014 14:01:19.871040    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479870763162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-450021 -n ha-450021
E1014 14:01:20.856576   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr: (3.905120584s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
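A minimal way to re-run the status probe recorded in the commands above, outside the test harness, is sketched below in Go; the binary path out/minikube-linux-amd64 and the profile name ha-450021 are taken from this report and assumed to match a local integration-test checkout, so this is an illustrative sketch rather than a documented reproduction recipe.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same status invocation as ha_test.go:430, run against the ha-450021 profile.
	// A non-zero exit is expected while the m02 control-plane node is still recovering.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "ha-450021",
		"status", "-v=7", "--alsologtostderr").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("status exited non-zero:", err)
	}
}

Comparing that output against the expectations asserted at ha_test.go:437-446 (three control-plane nodes present, four hosts running, four kubelets running, three apiservers running) indicates which component the degraded status is being attributed to.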
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-450021 -n ha-450021
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 logs -n 25: (1.395967509s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m03_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m04 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp testdata/cp-test.txt                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m04_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03:/home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m03 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-450021 node stop m02 -v=7                                                     | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-450021 node start m02 -v=7                                                    | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:54:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:54:19.812271   25306 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:54:19.812610   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.812625   25306 out.go:358] Setting ErrFile to fd 2...
	I1014 13:54:19.812632   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.813049   25306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:54:19.813610   25306 out.go:352] Setting JSON to false
	I1014 13:54:19.814483   25306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2210,"bootTime":1728911850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:54:19.814571   25306 start.go:139] virtualization: kvm guest
	I1014 13:54:19.816884   25306 out.go:177] * [ha-450021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:54:19.818710   25306 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:54:19.818708   25306 notify.go:220] Checking for updates...
	I1014 13:54:19.821425   25306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:54:19.822777   25306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:54:19.824007   25306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.825232   25306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:54:19.826443   25306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:54:19.827738   25306 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:54:19.861394   25306 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 13:54:19.862707   25306 start.go:297] selected driver: kvm2
	I1014 13:54:19.862720   25306 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:54:19.862734   25306 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:54:19.863393   25306 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.863486   25306 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:54:19.878143   25306 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:54:19.878185   25306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:54:19.878407   25306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:54:19.878437   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:19.878478   25306 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 13:54:19.878486   25306 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:54:19.878530   25306 start.go:340] cluster config:
	{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:19.878657   25306 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.881226   25306 out.go:177] * Starting "ha-450021" primary control-plane node in "ha-450021" cluster
	I1014 13:54:19.882326   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:19.882357   25306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:54:19.882366   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:54:19.882441   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:54:19.882451   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:54:19.882789   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:19.882811   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json: {Name:mk7e7a81dd8e8c0d913c7421cc0d458f1e8a36b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:19.882936   25306 start.go:360] acquireMachinesLock for ha-450021: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:54:19.882963   25306 start.go:364] duration metric: took 16.489µs to acquireMachinesLock for "ha-450021"
	I1014 13:54:19.882982   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:54:19.883029   25306 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 13:54:19.884643   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:54:19.884761   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:19.884802   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:19.899595   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I1014 13:54:19.900085   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:19.900603   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:54:19.900622   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:19.900928   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:19.901089   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:19.901224   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:19.901350   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:54:19.901382   25306 client.go:168] LocalClient.Create starting
	I1014 13:54:19.901414   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:54:19.901441   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901454   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901498   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:54:19.901515   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901544   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901570   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:54:19.901582   25306 main.go:141] libmachine: (ha-450021) Calling .PreCreateCheck
	I1014 13:54:19.901916   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:19.902252   25306 main.go:141] libmachine: Creating machine...
	I1014 13:54:19.902264   25306 main.go:141] libmachine: (ha-450021) Calling .Create
	I1014 13:54:19.902384   25306 main.go:141] libmachine: (ha-450021) Creating KVM machine...
	I1014 13:54:19.903685   25306 main.go:141] libmachine: (ha-450021) DBG | found existing default KVM network
	I1014 13:54:19.904369   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.904236   25330 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1014 13:54:19.904404   25306 main.go:141] libmachine: (ha-450021) DBG | created network xml: 
	I1014 13:54:19.904424   25306 main.go:141] libmachine: (ha-450021) DBG | <network>
	I1014 13:54:19.904433   25306 main.go:141] libmachine: (ha-450021) DBG |   <name>mk-ha-450021</name>
	I1014 13:54:19.904439   25306 main.go:141] libmachine: (ha-450021) DBG |   <dns enable='no'/>
	I1014 13:54:19.904447   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904459   25306 main.go:141] libmachine: (ha-450021) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 13:54:19.904466   25306 main.go:141] libmachine: (ha-450021) DBG |     <dhcp>
	I1014 13:54:19.904474   25306 main.go:141] libmachine: (ha-450021) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 13:54:19.904486   25306 main.go:141] libmachine: (ha-450021) DBG |     </dhcp>
	I1014 13:54:19.904496   25306 main.go:141] libmachine: (ha-450021) DBG |   </ip>
	I1014 13:54:19.904507   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904513   25306 main.go:141] libmachine: (ha-450021) DBG | </network>
	I1014 13:54:19.904522   25306 main.go:141] libmachine: (ha-450021) DBG | 
	I1014 13:54:19.910040   25306 main.go:141] libmachine: (ha-450021) DBG | trying to create private KVM network mk-ha-450021 192.168.39.0/24...
	I1014 13:54:19.971833   25306 main.go:141] libmachine: (ha-450021) DBG | private KVM network mk-ha-450021 192.168.39.0/24 created
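	For reference, the network XML printed above is what the kvm2 driver hands to libvirt. A rough manual equivalent (a sketch using virsh, not part of the minikube flow, assuming the XML were saved as mk-ha-450021.xml) would be:
	  # sketch only: define and start the same isolated network by hand
	  virsh --connect qemu:///system net-define mk-ha-450021.xml
	  virsh --connect qemu:///system net-start mk-ha-450021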
	I1014 13:54:19.971862   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.971805   25330 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.971874   25306 main.go:141] libmachine: (ha-450021) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:19.971891   25306 main.go:141] libmachine: (ha-450021) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:54:19.971967   25306 main.go:141] libmachine: (ha-450021) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:54:20.214152   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.214048   25330 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa...
	I1014 13:54:20.270347   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270208   25330 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk...
	I1014 13:54:20.270384   25306 main.go:141] libmachine: (ha-450021) DBG | Writing magic tar header
	I1014 13:54:20.270399   25306 main.go:141] libmachine: (ha-450021) DBG | Writing SSH key tar header
	I1014 13:54:20.270411   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270359   25330 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:20.270469   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021
	I1014 13:54:20.270577   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 (perms=drwx------)
	I1014 13:54:20.270629   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:54:20.270649   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:54:20.270663   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:20.270676   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:54:20.270690   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:54:20.270697   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:54:20.270707   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:54:20.270716   25306 main.go:141] libmachine: (ha-450021) Creating domain...
	I1014 13:54:20.270725   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:54:20.270732   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:54:20.270758   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:54:20.270778   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home
	I1014 13:54:20.270791   25306 main.go:141] libmachine: (ha-450021) DBG | Skipping /home - not owner
	I1014 13:54:20.271873   25306 main.go:141] libmachine: (ha-450021) define libvirt domain using xml: 
	I1014 13:54:20.271895   25306 main.go:141] libmachine: (ha-450021) <domain type='kvm'>
	I1014 13:54:20.271904   25306 main.go:141] libmachine: (ha-450021)   <name>ha-450021</name>
	I1014 13:54:20.271909   25306 main.go:141] libmachine: (ha-450021)   <memory unit='MiB'>2200</memory>
	I1014 13:54:20.271915   25306 main.go:141] libmachine: (ha-450021)   <vcpu>2</vcpu>
	I1014 13:54:20.271922   25306 main.go:141] libmachine: (ha-450021)   <features>
	I1014 13:54:20.271942   25306 main.go:141] libmachine: (ha-450021)     <acpi/>
	I1014 13:54:20.271950   25306 main.go:141] libmachine: (ha-450021)     <apic/>
	I1014 13:54:20.271956   25306 main.go:141] libmachine: (ha-450021)     <pae/>
	I1014 13:54:20.271997   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272026   25306 main.go:141] libmachine: (ha-450021)   </features>
	I1014 13:54:20.272048   25306 main.go:141] libmachine: (ha-450021)   <cpu mode='host-passthrough'>
	I1014 13:54:20.272058   25306 main.go:141] libmachine: (ha-450021)   
	I1014 13:54:20.272070   25306 main.go:141] libmachine: (ha-450021)   </cpu>
	I1014 13:54:20.272081   25306 main.go:141] libmachine: (ha-450021)   <os>
	I1014 13:54:20.272089   25306 main.go:141] libmachine: (ha-450021)     <type>hvm</type>
	I1014 13:54:20.272100   25306 main.go:141] libmachine: (ha-450021)     <boot dev='cdrom'/>
	I1014 13:54:20.272132   25306 main.go:141] libmachine: (ha-450021)     <boot dev='hd'/>
	I1014 13:54:20.272144   25306 main.go:141] libmachine: (ha-450021)     <bootmenu enable='no'/>
	I1014 13:54:20.272150   25306 main.go:141] libmachine: (ha-450021)   </os>
	I1014 13:54:20.272158   25306 main.go:141] libmachine: (ha-450021)   <devices>
	I1014 13:54:20.272173   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='cdrom'>
	I1014 13:54:20.272188   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/boot2docker.iso'/>
	I1014 13:54:20.272198   25306 main.go:141] libmachine: (ha-450021)       <target dev='hdc' bus='scsi'/>
	I1014 13:54:20.272208   25306 main.go:141] libmachine: (ha-450021)       <readonly/>
	I1014 13:54:20.272217   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272224   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='disk'>
	I1014 13:54:20.272233   25306 main.go:141] libmachine: (ha-450021)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:54:20.272252   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk'/>
	I1014 13:54:20.272267   25306 main.go:141] libmachine: (ha-450021)       <target dev='hda' bus='virtio'/>
	I1014 13:54:20.272277   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272287   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272303   25306 main.go:141] libmachine: (ha-450021)       <source network='mk-ha-450021'/>
	I1014 13:54:20.272315   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272323   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272332   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272356   25306 main.go:141] libmachine: (ha-450021)       <source network='default'/>
	I1014 13:54:20.272378   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272390   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272397   25306 main.go:141] libmachine: (ha-450021)     <serial type='pty'>
	I1014 13:54:20.272402   25306 main.go:141] libmachine: (ha-450021)       <target port='0'/>
	I1014 13:54:20.272409   25306 main.go:141] libmachine: (ha-450021)     </serial>
	I1014 13:54:20.272414   25306 main.go:141] libmachine: (ha-450021)     <console type='pty'>
	I1014 13:54:20.272421   25306 main.go:141] libmachine: (ha-450021)       <target type='serial' port='0'/>
	I1014 13:54:20.272426   25306 main.go:141] libmachine: (ha-450021)     </console>
	I1014 13:54:20.272433   25306 main.go:141] libmachine: (ha-450021)     <rng model='virtio'>
	I1014 13:54:20.272442   25306 main.go:141] libmachine: (ha-450021)       <backend model='random'>/dev/random</backend>
	I1014 13:54:20.272449   25306 main.go:141] libmachine: (ha-450021)     </rng>
	I1014 13:54:20.272464   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272479   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272490   25306 main.go:141] libmachine: (ha-450021)   </devices>
	I1014 13:54:20.272499   25306 main.go:141] libmachine: (ha-450021) </domain>
	I1014 13:54:20.272508   25306 main.go:141] libmachine: (ha-450021) 
	I1014 13:54:20.276743   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:57:d6:54 in network default
	I1014 13:54:20.277233   25306 main.go:141] libmachine: (ha-450021) Ensuring networks are active...
	I1014 13:54:20.277256   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:20.277849   25306 main.go:141] libmachine: (ha-450021) Ensuring network default is active
	I1014 13:54:20.278100   25306 main.go:141] libmachine: (ha-450021) Ensuring network mk-ha-450021 is active
	I1014 13:54:20.278557   25306 main.go:141] libmachine: (ha-450021) Getting domain xml...
	I1014 13:54:20.279179   25306 main.go:141] libmachine: (ha-450021) Creating domain...
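	For reference, the domain XML above is defined and booted through the libvirt API. A rough virsh equivalent (illustrative only, assuming the XML is saved as ha-450021.xml) would be:
	  # sketch only: define and start an equivalent domain by hand
	  virsh --connect qemu:///system define ha-450021.xml
	  virsh --connect qemu:///system start ha-450021
	  # the "Waiting to get IP" loop that follows is effectively polling the private network's DHCP leases
	  virsh --connect qemu:///system net-dhcp-leases mk-ha-450021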
	I1014 13:54:21.462335   25306 main.go:141] libmachine: (ha-450021) Waiting to get IP...
	I1014 13:54:21.463069   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.463429   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.463469   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.463416   25330 retry.go:31] will retry after 252.896893ms: waiting for machine to come up
	I1014 13:54:21.717838   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.718276   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.718307   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.718253   25330 retry.go:31] will retry after 323.417298ms: waiting for machine to come up
	I1014 13:54:22.043653   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.044089   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.044113   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.044049   25330 retry.go:31] will retry after 429.247039ms: waiting for machine to come up
	I1014 13:54:22.474550   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.475007   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.475032   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.474972   25330 retry.go:31] will retry after 584.602082ms: waiting for machine to come up
	I1014 13:54:23.060636   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.061070   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.061096   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.061025   25330 retry.go:31] will retry after 757.618183ms: waiting for machine to come up
	I1014 13:54:23.819839   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.820349   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.820388   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.820305   25330 retry.go:31] will retry after 770.363721ms: waiting for machine to come up
	I1014 13:54:24.592151   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:24.592528   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:24.592563   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:24.592475   25330 retry.go:31] will retry after 746.543201ms: waiting for machine to come up
	I1014 13:54:25.340318   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:25.340826   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:25.340855   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:25.340782   25330 retry.go:31] will retry after 1.064448623s: waiting for machine to come up
	I1014 13:54:26.407039   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:26.407396   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:26.407443   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:26.407341   25330 retry.go:31] will retry after 1.702825811s: waiting for machine to come up
	I1014 13:54:28.112412   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:28.112812   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:28.112833   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:28.112771   25330 retry.go:31] will retry after 2.323768802s: waiting for machine to come up
	I1014 13:54:30.438077   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:30.438423   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:30.438463   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:30.438389   25330 retry.go:31] will retry after 2.882558658s: waiting for machine to come up
	I1014 13:54:33.324506   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:33.324987   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:33.325011   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:33.324949   25330 retry.go:31] will retry after 3.489582892s: waiting for machine to come up
	I1014 13:54:36.817112   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:36.817504   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:36.817523   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:36.817476   25330 retry.go:31] will retry after 4.118141928s: waiting for machine to come up
	I1014 13:54:40.937526   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938020   25306 main.go:141] libmachine: (ha-450021) Found IP for machine: 192.168.39.176
	I1014 13:54:40.938039   25306 main.go:141] libmachine: (ha-450021) Reserving static IP address...
	I1014 13:54:40.938070   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has current primary IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938454   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find host DHCP lease matching {name: "ha-450021", mac: "52:54:00:a1:20:5f", ip: "192.168.39.176"} in network mk-ha-450021
	I1014 13:54:41.006419   25306 main.go:141] libmachine: (ha-450021) DBG | Getting to WaitForSSH function...
	I1014 13:54:41.006450   25306 main.go:141] libmachine: (ha-450021) Reserved static IP address: 192.168.39.176
	I1014 13:54:41.006463   25306 main.go:141] libmachine: (ha-450021) Waiting for SSH to be available...
	I1014 13:54:41.008964   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009322   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.009350   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009443   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH client type: external
	I1014 13:54:41.009470   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa (-rw-------)
	I1014 13:54:41.009582   25306 main.go:141] libmachine: (ha-450021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:54:41.009598   25306 main.go:141] libmachine: (ha-450021) DBG | About to run SSH command:
	I1014 13:54:41.009610   25306 main.go:141] libmachine: (ha-450021) DBG | exit 0
	I1014 13:54:41.138539   25306 main.go:141] libmachine: (ha-450021) DBG | SSH cmd err, output: <nil>: 
	I1014 13:54:41.138806   25306 main.go:141] libmachine: (ha-450021) KVM machine creation complete!
	I1014 13:54:41.139099   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:41.139669   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139826   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139970   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:54:41.139983   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:54:41.141211   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:54:41.141221   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:54:41.141226   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:54:41.141232   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.143400   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143673   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.143693   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143898   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.144069   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144217   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144390   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.144570   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.144741   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.144750   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:54:41.257764   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:54:41.257787   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:54:41.257794   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.260355   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260721   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.260755   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260886   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.261058   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261185   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261349   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.261568   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.261770   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.261781   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:54:41.387334   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:54:41.387407   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:54:41.387415   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:54:41.387428   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387694   25306 buildroot.go:166] provisioning hostname "ha-450021"
	I1014 13:54:41.387742   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.390287   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390677   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.390702   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390836   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.391004   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391122   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391234   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.391358   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.391508   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.391518   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname
	I1014 13:54:41.517186   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 13:54:41.517216   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.520093   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520451   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.520480   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520651   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.520827   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.520970   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.521077   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.521209   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.521391   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.521405   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:54:41.643685   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:54:41.643709   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:54:41.643742   25306 buildroot.go:174] setting up certificates
	I1014 13:54:41.643754   25306 provision.go:84] configureAuth start
	I1014 13:54:41.643778   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.644050   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:41.646478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.646878   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.646897   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.647059   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.648912   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649213   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.649236   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649373   25306 provision.go:143] copyHostCerts
	I1014 13:54:41.649402   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649434   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:54:41.649453   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649515   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:54:41.649594   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649617   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:54:41.649623   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649649   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:54:41.649688   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649704   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:54:41.649710   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649730   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:54:41.649772   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021 san=[127.0.0.1 192.168.39.176 ha-450021 localhost minikube]
	I1014 13:54:41.997744   25306 provision.go:177] copyRemoteCerts
	I1014 13:54:41.997799   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:54:41.997817   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.000612   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.000903   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.000935   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.001075   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.001266   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.001429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.001565   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.088827   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:54:42.088897   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:54:42.116095   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:54:42.116160   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:54:42.142757   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:54:42.142813   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 13:54:42.169537   25306 provision.go:87] duration metric: took 525.766906ms to configureAuth
	I1014 13:54:42.169566   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:54:42.169754   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:54:42.169842   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.173229   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174055   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.174080   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174242   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.174429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174574   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.174880   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.175029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.175043   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:54:42.406341   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:54:42.406376   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:54:42.406388   25306 main.go:141] libmachine: (ha-450021) Calling .GetURL
	I1014 13:54:42.407812   25306 main.go:141] libmachine: (ha-450021) DBG | Using libvirt version 6000000
	I1014 13:54:42.409824   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410126   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.410157   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410300   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:54:42.410319   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:54:42.410327   25306 client.go:171] duration metric: took 22.508934376s to LocalClient.Create
	I1014 13:54:42.410349   25306 start.go:167] duration metric: took 22.50900119s to libmachine.API.Create "ha-450021"
	I1014 13:54:42.410361   25306 start.go:293] postStartSetup for "ha-450021" (driver="kvm2")
	I1014 13:54:42.410370   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:54:42.410386   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.410579   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:54:42.410619   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.412494   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412776   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.412801   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.413098   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.413204   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.413344   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.501187   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:54:42.505548   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:54:42.505573   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:54:42.505640   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:54:42.505739   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:54:42.505751   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:54:42.505871   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:54:42.515100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:42.540037   25306 start.go:296] duration metric: took 129.664961ms for postStartSetup
	I1014 13:54:42.540090   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:42.540652   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.543542   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.543870   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.543893   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.544115   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:42.544316   25306 start.go:128] duration metric: took 22.661278968s to createHost
	I1014 13:54:42.544340   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.546241   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546584   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.546619   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546735   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.546887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547016   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547115   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.547241   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.547400   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.547410   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:54:42.659258   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914082.633821014
	
	I1014 13:54:42.659276   25306 fix.go:216] guest clock: 1728914082.633821014
	I1014 13:54:42.659283   25306 fix.go:229] Guest: 2024-10-14 13:54:42.633821014 +0000 UTC Remote: 2024-10-14 13:54:42.544328107 +0000 UTC m=+22.768041164 (delta=89.492907ms)
	I1014 13:54:42.659308   25306 fix.go:200] guest clock delta is within tolerance: 89.492907ms
	I1014 13:54:42.659315   25306 start.go:83] releasing machines lock for "ha-450021", held for 22.776339529s
	I1014 13:54:42.659340   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.659634   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.662263   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662566   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.662590   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662762   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663245   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663382   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663435   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:54:42.663485   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.663584   25306 ssh_runner.go:195] Run: cat /version.json
	I1014 13:54:42.663609   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.665952   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666140   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666285   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666310   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666455   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666495   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666742   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.666851   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.666858   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.667031   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.667026   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.667128   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.747369   25306 ssh_runner.go:195] Run: systemctl --version
	I1014 13:54:42.781149   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:54:42.939239   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:54:42.945827   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:54:42.945908   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:54:42.961868   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:54:42.961898   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:54:42.961965   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:54:42.979523   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:54:42.994309   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:54:42.994364   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:54:43.009231   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:54:43.023792   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:54:43.139525   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:54:43.303272   25306 docker.go:233] disabling docker service ...
	I1014 13:54:43.303333   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:54:43.318132   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:54:43.331650   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:54:43.447799   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:54:43.574532   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:54:43.588882   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:54:43.606788   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:54:43.606849   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.617065   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:54:43.617138   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.627421   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.637692   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.648944   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:54:43.659223   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.669296   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.686887   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
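Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl) leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with key settings roughly as follows. This is a reconstruction from the commands shown, not a capture of the file, and the drop-in's section headers are omitted:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]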
	I1014 13:54:43.697925   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:54:43.707402   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:54:43.707476   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:54:43.720091   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
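The sysctl check fails only because br_netfilter is not loaded yet, so the two commands above load the module and turn on IPv4 forwarding directly. An illustrative follow-up check (not part of this run):

    lsmod | grep br_netfilter                    # module should now be listed
    sysctl net.bridge.bridge-nf-call-iptables    # the key exists once the module is loaded
    cat /proc/sys/net/ipv4/ip_forward            # prints 1 after the echo above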
	I1014 13:54:43.729667   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:43.845781   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:54:43.932782   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:54:43.932868   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:54:43.938172   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:54:43.938228   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:54:43.941774   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:54:43.979317   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:54:43.979415   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.006952   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.038472   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:54:44.039762   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:44.042304   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042634   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:44.042661   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042831   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:54:44.046611   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:54:44.059369   25306 kubeadm.go:883] updating cluster {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:54:44.059491   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:44.059551   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:44.090998   25306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 13:54:44.091053   25306 ssh_runner.go:195] Run: which lz4
	I1014 13:54:44.094706   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 13:54:44.094776   25306 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 13:54:44.098775   25306 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 13:54:44.098800   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 13:54:45.421436   25306 crio.go:462] duration metric: took 1.326676583s to copy over tarball
	I1014 13:54:45.421513   25306 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 13:54:47.393636   25306 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97209405s)
	I1014 13:54:47.393677   25306 crio.go:469] duration metric: took 1.97220742s to extract the tarball
	I1014 13:54:47.393687   25306 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 13:54:47.430848   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:47.475174   25306 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:54:47.475197   25306 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:54:47.475204   25306 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.1 crio true true} ...
	I1014 13:54:47.475299   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:54:47.475375   25306 ssh_runner.go:195] Run: crio config
	I1014 13:54:47.520162   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:47.520183   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:54:47.520192   25306 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:54:47.520214   25306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450021 NodeName:ha-450021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:54:47.520316   25306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-450021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 13:54:47.520338   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:54:47.520375   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:54:47.537448   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:54:47.537535   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
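A few lines below, this manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml, so kubelet runs kube-vip as a static pod that advertises the HA virtual IP 192.168.39.254 on eth0 and load-balances the API port (cp_enable/lb_enable above). An illustrative probe of the VIP once the control plane is up, not part of this run:

    curl -k https://192.168.39.254:8443/healthz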
	I1014 13:54:47.537577   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:54:47.551104   25306 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:54:47.551176   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 13:54:47.562687   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1014 13:54:47.578926   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:54:47.594827   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1014 13:54:47.610693   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
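With the kubeadm config and the kube-vip manifest now on the node, the rendered config could in principle be sanity-checked with kubeadm's own validator before init; this is illustrative only and not part of the run above:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new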
	I1014 13:54:47.626695   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:54:47.630338   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
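As with host.minikube.internal earlier, this one-liner drops any stale entry and appends the current mapping, so after both rewrites /etc/hosts on the node carries entries like:

    192.168.39.1	host.minikube.internal
    192.168.39.254	control-plane.minikube.internal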
	I1014 13:54:47.642280   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:47.756050   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:54:47.773461   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.176
	I1014 13:54:47.773484   25306 certs.go:194] generating shared ca certs ...
	I1014 13:54:47.773503   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:47.773705   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:54:47.773829   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:54:47.773848   25306 certs.go:256] generating profile certs ...
	I1014 13:54:47.773913   25306 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:54:47.773930   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt with IP's: []
	I1014 13:54:48.113501   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt ...
	I1014 13:54:48.113531   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt: {Name:mkbf9820119866d476b6914d2148d200b676c657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113715   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key ...
	I1014 13:54:48.113731   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key: {Name:mk7d74bdc4633efc50efa47cc87ab000404cd20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113831   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180
	I1014 13:54:48.113850   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.254]
	I1014 13:54:48.267925   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 ...
	I1014 13:54:48.267957   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180: {Name:mkd19ba2c223d25d9a0673db3befa3152f7a2c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268143   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 ...
	I1014 13:54:48.268160   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180: {Name:mkd725fc60a32f585bc691d5e3dd373c3c488835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268262   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:54:48.268370   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:54:48.268460   25306 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:54:48.268481   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt with IP's: []
	I1014 13:54:48.434515   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt ...
	I1014 13:54:48.434539   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt: {Name:mk37070511c0eff0f5c442e93060bbaddee85673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434689   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key ...
	I1014 13:54:48.434700   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key: {Name:mk4252d17e842b88b135b952004ba8203bf67100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434774   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:54:48.434791   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:54:48.434801   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:54:48.434813   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:54:48.434823   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:54:48.434833   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:54:48.434843   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:54:48.434854   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:54:48.434895   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:54:48.434936   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:54:48.434945   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:54:48.434969   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:54:48.434990   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:54:48.435010   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:54:48.435044   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:48.435072   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.435084   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.435096   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.436322   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:54:48.461913   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:54:48.484404   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:54:48.506815   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:54:48.532871   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 13:54:48.555023   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:54:48.577102   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:54:48.599841   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:54:48.622100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:54:48.644244   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:54:48.666067   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:54:48.688272   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:54:48.704452   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:54:48.709950   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:54:48.720462   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724736   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724786   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.730515   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:54:48.740926   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:54:48.751163   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755136   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755173   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.760601   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:54:48.771042   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:54:48.781517   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785721   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785757   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.791039   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
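Each certificate is first linked into /etc/ssl/certs and then exposed under OpenSSL's subject-hash name: the value printed by openssl x509 -hash becomes the <hash>.0 symlink that OpenSSL uses for CA lookup. A minimal sketch of the same pattern for the minikubeCA cert handled above:

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 for this CA, per the run above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"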
	I1014 13:54:48.801295   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:54:48.805300   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:54:48.805353   25306 kubeadm.go:392] StartCluster: {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:48.805425   25306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:54:48.805474   25306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:54:48.846958   25306 cri.go:89] found id: ""
	I1014 13:54:48.847017   25306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:54:48.856997   25306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:54:48.866515   25306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:54:48.876223   25306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:54:48.876241   25306 kubeadm.go:157] found existing configuration files:
	
	I1014 13:54:48.876288   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:54:48.885144   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:54:48.885195   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:54:48.894355   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:54:48.902957   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:54:48.903009   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:54:48.912153   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.921701   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:54:48.921759   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.931128   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:54:48.939839   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:54:48.939871   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 13:54:48.948948   25306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 13:54:49.168356   25306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:55:00.103864   25306 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:55:00.103941   25306 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:55:00.104029   25306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:55:00.104143   25306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:55:00.104280   25306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:55:00.104375   25306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:55:00.106272   25306 out.go:235]   - Generating certificates and keys ...
	I1014 13:55:00.106362   25306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:55:00.106429   25306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:55:00.106511   25306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:55:00.106612   25306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:55:00.106709   25306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:55:00.106793   25306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:55:00.106864   25306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:55:00.107022   25306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107089   25306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:55:00.107238   25306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107331   25306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:55:00.107416   25306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:55:00.107496   25306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:55:00.107576   25306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:55:00.107656   25306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:55:00.107736   25306 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:55:00.107811   25306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:55:00.107905   25306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:55:00.107957   25306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:55:00.108061   25306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:55:00.108162   25306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:55:00.109922   25306 out.go:235]   - Booting up control plane ...
	I1014 13:55:00.110034   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:55:00.110132   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:55:00.110214   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:55:00.110345   25306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:55:00.110449   25306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:55:00.110494   25306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:55:00.110622   25306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:55:00.110705   25306 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:55:00.110755   25306 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002174478s
	I1014 13:55:00.110843   25306 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:55:00.110911   25306 kubeadm.go:310] [api-check] The API server is healthy after 5.813875513s
	I1014 13:55:00.111034   25306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:55:00.111171   25306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:55:00.111231   25306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:55:00.111391   25306 kubeadm.go:310] [mark-control-plane] Marking the node ha-450021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:55:00.111441   25306 kubeadm.go:310] [bootstrap-token] Using token: e8eaxr.5trfuyfb27hv7e11
	I1014 13:55:00.112896   25306 out.go:235]   - Configuring RBAC rules ...
	I1014 13:55:00.113020   25306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:55:00.113086   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:55:00.113219   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:55:00.113369   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:55:00.113527   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:55:00.113646   25306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:55:00.113778   25306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:55:00.113819   25306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:55:00.113862   25306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:55:00.113868   25306 kubeadm.go:310] 
	I1014 13:55:00.113922   25306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:55:00.113928   25306 kubeadm.go:310] 
	I1014 13:55:00.113997   25306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:55:00.114004   25306 kubeadm.go:310] 
	I1014 13:55:00.114048   25306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:55:00.114129   25306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:55:00.114180   25306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:55:00.114188   25306 kubeadm.go:310] 
	I1014 13:55:00.114245   25306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:55:00.114263   25306 kubeadm.go:310] 
	I1014 13:55:00.114330   25306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:55:00.114341   25306 kubeadm.go:310] 
	I1014 13:55:00.114411   25306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:55:00.114513   25306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:55:00.114572   25306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:55:00.114578   25306 kubeadm.go:310] 
	I1014 13:55:00.114693   25306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:55:00.114784   25306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:55:00.114793   25306 kubeadm.go:310] 
	I1014 13:55:00.114891   25306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.114977   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 13:55:00.114998   25306 kubeadm.go:310] 	--control-plane 
	I1014 13:55:00.115002   25306 kubeadm.go:310] 
	I1014 13:55:00.115074   25306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:55:00.115080   25306 kubeadm.go:310] 
	I1014 13:55:00.115154   25306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.115275   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 13:55:00.115292   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:55:00.115302   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:55:00.117091   25306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 13:55:00.118483   25306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 13:55:00.124368   25306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 13:55:00.124388   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 13:55:00.145958   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 13:55:00.528887   25306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:55:00.528967   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:00.528987   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021 minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=true
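The two kubectl invocations above grant cluster-admin to the kube-system default service account (the minikube-rbac binding) and stamp the node with minikube's bookkeeping labels. An illustrative way to verify both afterwards, not part of this run and assuming the kubeconfig context is named after the profile:

    kubectl --context ha-450021 get node ha-450021 --show-labels
    kubectl --context ha-450021 get clusterrolebinding minikube-rbac -o wide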
	I1014 13:55:00.543744   25306 ops.go:34] apiserver oom_adj: -16
	I1014 13:55:00.662237   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.162275   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.662698   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.163027   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.662525   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.162972   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.662524   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.751160   25306 kubeadm.go:1113] duration metric: took 3.222260966s to wait for elevateKubeSystemPrivileges
	I1014 13:55:03.751200   25306 kubeadm.go:394] duration metric: took 14.945849765s to StartCluster
	I1014 13:55:03.751222   25306 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.751304   25306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.752000   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.752256   25306 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:03.752277   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:55:03.752262   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:55:03.752277   25306 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 13:55:03.752370   25306 addons.go:69] Setting storage-provisioner=true in profile "ha-450021"
	I1014 13:55:03.752388   25306 addons.go:234] Setting addon storage-provisioner=true in "ha-450021"
	I1014 13:55:03.752407   25306 addons.go:69] Setting default-storageclass=true in profile "ha-450021"
	I1014 13:55:03.752422   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.752435   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:03.752440   25306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-450021"
	I1014 13:55:03.752851   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752853   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752892   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.752907   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.768120   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40745
	I1014 13:55:03.768294   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I1014 13:55:03.768559   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.768773   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.769132   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769156   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769285   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769308   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769488   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769594   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769745   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.770040   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.770082   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.771657   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.771868   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 13:55:03.772274   25306 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 13:55:03.772426   25306 addons.go:234] Setting addon default-storageclass=true in "ha-450021"
	I1014 13:55:03.772458   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.772689   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.772720   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.785301   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I1014 13:55:03.785754   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.786274   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.786301   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.786653   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.786685   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I1014 13:55:03.786852   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.787134   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.787596   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.787621   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.787924   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.788463   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.788507   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.788527   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.790666   25306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:55:03.791877   25306 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:03.791892   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:55:03.791905   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.794484   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794853   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.794881   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794998   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.795150   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.795298   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.795425   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.804082   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1014 13:55:03.804475   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.804871   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.804893   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.805154   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.805296   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.806617   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.806811   25306 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:03.806824   25306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:55:03.806838   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.809334   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809735   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.809764   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.810083   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.810214   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.810346   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.916382   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:55:03.970762   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:04.045876   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:04.562851   25306 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1014 13:55:04.828250   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828267   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828285   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828272   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828566   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828578   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828586   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828592   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828628   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828642   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828650   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828657   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828760   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.828781   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828790   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830286   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.830303   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830318   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.830357   25306 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 13:55:04.830377   25306 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 13:55:04.830467   25306 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 13:55:04.830477   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.830487   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.830500   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.851944   25306 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1014 13:55:04.852525   25306 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 13:55:04.852541   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.852549   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.852558   25306 round_trippers.go:473]     Content-Type: application/json
	I1014 13:55:04.852569   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.860873   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:55:04.863865   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.863890   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.864194   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.864235   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.864246   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.865910   25306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 13:55:04.867207   25306 addons.go:510] duration metric: took 1.114927542s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 13:55:04.867245   25306 start.go:246] waiting for cluster config update ...
	I1014 13:55:04.867260   25306 start.go:255] writing updated cluster config ...
	I1014 13:55:04.868981   25306 out.go:201] 
	I1014 13:55:04.870358   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:04.870432   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.871998   25306 out.go:177] * Starting "ha-450021-m02" control-plane node in "ha-450021" cluster
	I1014 13:55:04.873148   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:55:04.873168   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:55:04.873259   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:55:04.873270   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:55:04.873348   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.873725   25306 start.go:360] acquireMachinesLock for ha-450021-m02: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:55:04.873773   25306 start.go:364] duration metric: took 27.606µs to acquireMachinesLock for "ha-450021-m02"
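
The acquireMachinesLock lines above show the provisioner taking a profile-wide machine lock (Delay:500ms, Timeout:13m0s) before creating "ha-450021-m02". A minimal sketch of that poll-until-acquired pattern, using a hypothetical lock-file path rather than minikube's actual lock implementation:

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file until it succeeds or the
// timeout elapses, mirroring the Delay/Timeout fields printed in the log.
// The lock file path used by the caller below is hypothetical.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out waiting for machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 13*time.Minute)
	if err != nil {
		fmt.Println("lock error:", err)
		return
	}
	defer release()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}
```

When the lock is uncontested, as in the run above, acquisition completes on the first attempt, which is why the log reports only a few microseconds.
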
	I1014 13:55:04.873797   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:04.873856   25306 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1014 13:55:04.875450   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:55:04.875534   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:04.875571   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:04.891858   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1014 13:55:04.892468   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:04.893080   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:04.893101   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:04.893416   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:04.893639   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:04.893812   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:04.894009   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:55:04.894037   25306 client.go:168] LocalClient.Create starting
	I1014 13:55:04.894069   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:55:04.894114   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894134   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894211   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:55:04.894240   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894258   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894285   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:55:04.894306   25306 main.go:141] libmachine: (ha-450021-m02) Calling .PreCreateCheck
	I1014 13:55:04.894485   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:04.894889   25306 main.go:141] libmachine: Creating machine...
	I1014 13:55:04.894903   25306 main.go:141] libmachine: (ha-450021-m02) Calling .Create
	I1014 13:55:04.895072   25306 main.go:141] libmachine: (ha-450021-m02) Creating KVM machine...
	I1014 13:55:04.896272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing default KVM network
	I1014 13:55:04.896429   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing private KVM network mk-ha-450021
	I1014 13:55:04.896566   25306 main.go:141] libmachine: (ha-450021-m02) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:04.896592   25306 main.go:141] libmachine: (ha-450021-m02) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:55:04.896679   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:04.896574   25672 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:04.896767   25306 main.go:141] libmachine: (ha-450021-m02) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:55:05.156236   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.156095   25672 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa...
	I1014 13:55:05.229289   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229176   25672 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk...
	I1014 13:55:05.229317   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing magic tar header
	I1014 13:55:05.229327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing SSH key tar header
	I1014 13:55:05.229334   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229291   25672 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:05.229448   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02
	I1014 13:55:05.229476   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:55:05.229494   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 (perms=drwx------)
	I1014 13:55:05.229512   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:05.229525   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:55:05.229536   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:55:05.229551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:55:05.229562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:55:05.229576   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home
	I1014 13:55:05.229584   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Skipping /home - not owner
	I1014 13:55:05.229634   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:55:05.229673   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:55:05.229699   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:55:05.229714   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:55:05.229724   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:05.230559   25306 main.go:141] libmachine: (ha-450021-m02) define libvirt domain using xml: 
	I1014 13:55:05.230582   25306 main.go:141] libmachine: (ha-450021-m02) <domain type='kvm'>
	I1014 13:55:05.230608   25306 main.go:141] libmachine: (ha-450021-m02)   <name>ha-450021-m02</name>
	I1014 13:55:05.230626   25306 main.go:141] libmachine: (ha-450021-m02)   <memory unit='MiB'>2200</memory>
	I1014 13:55:05.230636   25306 main.go:141] libmachine: (ha-450021-m02)   <vcpu>2</vcpu>
	I1014 13:55:05.230650   25306 main.go:141] libmachine: (ha-450021-m02)   <features>
	I1014 13:55:05.230660   25306 main.go:141] libmachine: (ha-450021-m02)     <acpi/>
	I1014 13:55:05.230666   25306 main.go:141] libmachine: (ha-450021-m02)     <apic/>
	I1014 13:55:05.230676   25306 main.go:141] libmachine: (ha-450021-m02)     <pae/>
	I1014 13:55:05.230682   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.230689   25306 main.go:141] libmachine: (ha-450021-m02)   </features>
	I1014 13:55:05.230699   25306 main.go:141] libmachine: (ha-450021-m02)   <cpu mode='host-passthrough'>
	I1014 13:55:05.230706   25306 main.go:141] libmachine: (ha-450021-m02)   
	I1014 13:55:05.230711   25306 main.go:141] libmachine: (ha-450021-m02)   </cpu>
	I1014 13:55:05.230718   25306 main.go:141] libmachine: (ha-450021-m02)   <os>
	I1014 13:55:05.230728   25306 main.go:141] libmachine: (ha-450021-m02)     <type>hvm</type>
	I1014 13:55:05.230739   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='cdrom'/>
	I1014 13:55:05.230748   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='hd'/>
	I1014 13:55:05.230763   25306 main.go:141] libmachine: (ha-450021-m02)     <bootmenu enable='no'/>
	I1014 13:55:05.230773   25306 main.go:141] libmachine: (ha-450021-m02)   </os>
	I1014 13:55:05.230780   25306 main.go:141] libmachine: (ha-450021-m02)   <devices>
	I1014 13:55:05.230790   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='cdrom'>
	I1014 13:55:05.230819   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/boot2docker.iso'/>
	I1014 13:55:05.230839   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hdc' bus='scsi'/>
	I1014 13:55:05.230847   25306 main.go:141] libmachine: (ha-450021-m02)       <readonly/>
	I1014 13:55:05.230854   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230864   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='disk'>
	I1014 13:55:05.230881   25306 main.go:141] libmachine: (ha-450021-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:55:05.230897   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk'/>
	I1014 13:55:05.230912   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hda' bus='virtio'/>
	I1014 13:55:05.230923   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230933   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230942   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='mk-ha-450021'/>
	I1014 13:55:05.230949   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230956   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.230966   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230975   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='default'/>
	I1014 13:55:05.230987   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230998   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.231008   25306 main.go:141] libmachine: (ha-450021-m02)     <serial type='pty'>
	I1014 13:55:05.231016   25306 main.go:141] libmachine: (ha-450021-m02)       <target port='0'/>
	I1014 13:55:05.231026   25306 main.go:141] libmachine: (ha-450021-m02)     </serial>
	I1014 13:55:05.231034   25306 main.go:141] libmachine: (ha-450021-m02)     <console type='pty'>
	I1014 13:55:05.231042   25306 main.go:141] libmachine: (ha-450021-m02)       <target type='serial' port='0'/>
	I1014 13:55:05.231047   25306 main.go:141] libmachine: (ha-450021-m02)     </console>
	I1014 13:55:05.231060   25306 main.go:141] libmachine: (ha-450021-m02)     <rng model='virtio'>
	I1014 13:55:05.231073   25306 main.go:141] libmachine: (ha-450021-m02)       <backend model='random'>/dev/random</backend>
	I1014 13:55:05.231079   25306 main.go:141] libmachine: (ha-450021-m02)     </rng>
	I1014 13:55:05.231090   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231096   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231107   25306 main.go:141] libmachine: (ha-450021-m02)   </devices>
	I1014 13:55:05.231116   25306 main.go:141] libmachine: (ha-450021-m02) </domain>
	I1014 13:55:05.231125   25306 main.go:141] libmachine: (ha-450021-m02) 
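
The XML printed above is the libvirt domain definition the kvm2 driver submits before "Creating domain...". A hedged sketch of defining and booting such a domain with the libvirt Go bindings; the import path, connection handling, and the domain.xml file are assumptions for illustration, not taken from the driver code:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
)

func main() {
	// The URI matches the KVMQemuURI value in the machine config dump above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer conn.Close()

	// domain.xml would hold a definition like the one logged above.
	xmlConfig, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatalf("read xml: %v", err)
	}

	// Define the persistent domain, then start it.
	dom, err := conn.DomainDefineXML(string(xmlConfig))
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatalf("start domain: %v", err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain:", name)
}
```
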
	I1014 13:55:05.238505   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:39:fb:46 in network default
	I1014 13:55:05.239084   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring networks are active...
	I1014 13:55:05.239109   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:05.239788   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network default is active
	I1014 13:55:05.240113   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network mk-ha-450021 is active
	I1014 13:55:05.240488   25306 main.go:141] libmachine: (ha-450021-m02) Getting domain xml...
	I1014 13:55:05.241224   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:06.508569   25306 main.go:141] libmachine: (ha-450021-m02) Waiting to get IP...
	I1014 13:55:06.509274   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.509728   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.509800   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.509721   25672 retry.go:31] will retry after 253.994001ms: waiting for machine to come up
	I1014 13:55:06.765296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.765720   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.765754   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.765695   25672 retry.go:31] will retry after 330.390593ms: waiting for machine to come up
	I1014 13:55:07.097342   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.097779   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.097809   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.097725   25672 retry.go:31] will retry after 315.743674ms: waiting for machine to come up
	I1014 13:55:07.414954   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.415551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.415596   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.415518   25672 retry.go:31] will retry after 505.396104ms: waiting for machine to come up
	I1014 13:55:07.922086   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.922530   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.922555   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.922518   25672 retry.go:31] will retry after 762.026701ms: waiting for machine to come up
	I1014 13:55:08.686471   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:08.686874   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:08.686903   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:08.686842   25672 retry.go:31] will retry after 891.989591ms: waiting for machine to come up
	I1014 13:55:09.580677   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:09.581174   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:09.581195   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:09.581150   25672 retry.go:31] will retry after 716.006459ms: waiting for machine to come up
	I1014 13:55:10.299036   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:10.299435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:10.299462   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:10.299390   25672 retry.go:31] will retry after 999.038321ms: waiting for machine to come up
	I1014 13:55:11.299678   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:11.300155   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:11.300182   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:11.300092   25672 retry.go:31] will retry after 1.384319167s: waiting for machine to come up
	I1014 13:55:12.686664   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:12.687084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:12.687130   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:12.687031   25672 retry.go:31] will retry after 1.750600606s: waiting for machine to come up
	I1014 13:55:14.439721   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:14.440157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:14.440185   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:14.440132   25672 retry.go:31] will retry after 2.719291498s: waiting for machine to come up
	I1014 13:55:17.160916   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:17.161338   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:17.161359   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:17.161288   25672 retry.go:31] will retry after 2.934487947s: waiting for machine to come up
	I1014 13:55:20.097623   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:20.098033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:20.098054   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:20.097994   25672 retry.go:31] will retry after 3.495468914s: waiting for machine to come up
	I1014 13:55:23.597556   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:23.598084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:23.598105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:23.598043   25672 retry.go:31] will retry after 4.955902252s: waiting for machine to come up
	I1014 13:55:28.555767   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.556335   25306 main.go:141] libmachine: (ha-450021-m02) Found IP for machine: 192.168.39.89
	I1014 13:55:28.556360   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
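
The repeated retry.go lines above are a backoff loop that re-queries the network's DHCP leases until the new VM's MAC address shows an IP. A self-contained sketch of that wait loop follows; lookupLeaseIP is a stand-in for the driver's real lease query, and the thresholds are chosen only so the demo terminates quickly:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// lookupLeaseIP stands in for querying the libvirt network's DHCP leases
// for a given MAC address; here it simply fails until a few seconds pass.
func lookupLeaseIP(mac string, since time.Time) (string, error) {
	if time.Since(since) < 3*time.Second {
		return "", errNoIP
	}
	return "192.168.39.89", nil // the address the log eventually reports
}

func main() {
	start := time.Now()
	delay := 250 * time.Millisecond
	for {
		ip, err := lookupLeaseIP("52:54:00:51:58:78", start)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Grow the delay with jitter, as the increasing
		// "will retry after ..." intervals in the log suggest.
		delay += time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}
```
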
	I1014 13:55:28.556369   25306 main.go:141] libmachine: (ha-450021-m02) Reserving static IP address...
	I1014 13:55:28.556652   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "ha-450021-m02", mac: "52:54:00:51:58:78", ip: "192.168.39.89"} in network mk-ha-450021
	I1014 13:55:28.627598   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:28.627633   25306 main.go:141] libmachine: (ha-450021-m02) Reserved static IP address: 192.168.39.89
	I1014 13:55:28.627646   25306 main.go:141] libmachine: (ha-450021-m02) Waiting for SSH to be available...
	I1014 13:55:28.629843   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.630161   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021
	I1014 13:55:28.630190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:51:58:78
	I1014 13:55:28.630310   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:28.630337   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:28.630368   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:28.630381   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:28.630396   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:28.634134   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:55:28.634150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:55:28.634157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | command : exit 0
	I1014 13:55:28.634162   25306 main.go:141] libmachine: (ha-450021-m02) DBG | err     : exit status 255
	I1014 13:55:28.634170   25306 main.go:141] libmachine: (ha-450021-m02) DBG | output  : 
	I1014 13:55:31.634385   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:31.636814   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.637150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637249   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:31.637272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:31.637290   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:31.637302   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:31.637327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:31.762693   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: <nil>: 
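
WaitForSSH shells out to the system ssh client and treats a clean `exit 0` as readiness; the first probe above fails with exit status 255 while sshd is still starting, and the retry succeeds. A rough sketch of that probe using os/exec, where the host, key path, and retry interval are illustrative:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// probeSSH runs `exit 0` on the remote host with options similar to the
// ones shown in the log and reports whether the command succeeded.
func probeSSH(host, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+host,
		"exit 0",
	)
	return cmd.Run() // a non-zero exit (e.g. 255) surfaces as an error
}

func main() {
	host := "192.168.39.89"           // from the log
	key := "/path/to/machines/id_rsa" // placeholder key path
	for {
		if err := probeSSH(host, key); err != nil {
			fmt.Println("ssh not ready yet:", err)
			time.Sleep(3 * time.Second) // the log retries after roughly 3s
			continue
		}
		fmt.Println("SSH is available")
		return
	}
}
```
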
	I1014 13:55:31.762993   25306 main.go:141] libmachine: (ha-450021-m02) KVM machine creation complete!
	I1014 13:55:31.763308   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:31.763786   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.763969   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.764130   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:55:31.764154   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetState
	I1014 13:55:31.765484   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:55:31.765498   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:55:31.765506   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:55:31.765513   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.767968   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768352   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.768386   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768540   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.768701   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.768883   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.769050   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.769231   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.769460   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.769474   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:55:31.877746   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:55:31.877770   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:55:31.877779   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.880489   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.880858   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.880884   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.881034   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.881200   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881337   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881482   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.881602   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.881767   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.881780   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:55:31.995447   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:55:31.995515   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:55:31.995529   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:55:31.995541   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995787   25306 buildroot.go:166] provisioning hostname "ha-450021-m02"
	I1014 13:55:31.995817   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995999   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.998434   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998820   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.998841   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998986   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.999184   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999375   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999496   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.999675   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.999836   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.999847   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m02 && echo "ha-450021-m02" | sudo tee /etc/hostname
	I1014 13:55:32.125055   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m02
	
	I1014 13:55:32.125093   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.128764   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129158   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.129191   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129369   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.129548   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129704   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129831   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.129997   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.130195   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.130212   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:55:32.251676   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:55:32.251705   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:55:32.251731   25306 buildroot.go:174] setting up certificates
	I1014 13:55:32.251744   25306 provision.go:84] configureAuth start
	I1014 13:55:32.251763   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:32.252028   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.254513   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.254862   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.254887   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.255045   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.257083   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257408   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.257435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257565   25306 provision.go:143] copyHostCerts
	I1014 13:55:32.257592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257618   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:55:32.257629   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257712   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:55:32.257797   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257821   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:55:32.257831   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257870   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:55:32.257928   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257951   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:55:32.257959   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257986   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:55:32.258053   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m02 san=[127.0.0.1 192.168.39.89 ha-450021-m02 localhost minikube]
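
provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, the node IP, the hostname, localhost, and minikube. A minimal crypto/x509 sketch that produces a certificate with those SANs; it self-signs for brevity, whereas the real flow signs with the profile's CA key:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-450021-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"ha-450021-m02", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.89")},
	}

	// Self-signed: template and parent are the same certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```

The copyRemoteCerts step that follows then pushes the CA, server certificate, and server key to /etc/docker on the new node, as the scp lines below record.
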
	I1014 13:55:32.418210   25306 provision.go:177] copyRemoteCerts
	I1014 13:55:32.418267   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:55:32.418287   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.421033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421356   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.421387   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421587   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.421794   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.421949   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.422067   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.508850   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:55:32.508917   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:55:32.534047   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:55:32.534120   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:55:32.558263   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:55:32.558335   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:55:32.582102   25306 provision.go:87] duration metric: took 330.342541ms to configureAuth
	I1014 13:55:32.582134   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:55:32.582301   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:32.582371   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.584832   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585166   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.585192   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585349   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.585528   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585644   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585802   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.585929   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.586092   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.586111   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:55:32.822330   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:55:32.822358   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:55:32.822366   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetURL
	I1014 13:55:32.823614   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using libvirt version 6000000
	I1014 13:55:32.826190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.826567   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826737   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:55:32.826754   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:55:32.826772   25306 client.go:171] duration metric: took 27.932717671s to LocalClient.Create
	I1014 13:55:32.826803   25306 start.go:167] duration metric: took 27.93279451s to libmachine.API.Create "ha-450021"
	I1014 13:55:32.826815   25306 start.go:293] postStartSetup for "ha-450021-m02" (driver="kvm2")
	I1014 13:55:32.826825   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:55:32.826846   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:32.827073   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:55:32.827097   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.829440   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829745   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.829785   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829885   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.830054   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.830208   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.830348   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.918434   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:55:32.922919   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:55:32.922947   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:55:32.923010   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:55:32.923092   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:55:32.923101   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:55:32.923187   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:55:32.933129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:32.957819   25306 start.go:296] duration metric: took 130.989484ms for postStartSetup
	I1014 13:55:32.957871   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:32.958438   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.961024   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961393   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.961425   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961630   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:32.961835   25306 start.go:128] duration metric: took 28.087968814s to createHost
	I1014 13:55:32.961858   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.964121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964493   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.964528   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964702   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.964854   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.964966   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.965109   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.965227   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.965432   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.965446   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:55:33.079362   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914133.060490571
	
	I1014 13:55:33.079386   25306 fix.go:216] guest clock: 1728914133.060490571
	I1014 13:55:33.079405   25306 fix.go:229] Guest: 2024-10-14 13:55:33.060490571 +0000 UTC Remote: 2024-10-14 13:55:32.961847349 +0000 UTC m=+73.185560400 (delta=98.643222ms)
	I1014 13:55:33.079425   25306 fix.go:200] guest clock delta is within tolerance: 98.643222ms
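	(The delta above is simply the guest wall clock minus the host-side timestamp. A tiny Go sketch of that arithmetic, with the two timestamps copied from the log; the tolerance constant here is an assumption for illustration, not minikube's actual value:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Layout matching Go's default time.Time formatting as printed in the log above.
		const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

		// Errors ignored for brevity in this sketch.
		guest, _ := time.Parse(layout, "2024-10-14 13:55:33.060490571 +0000 UTC")
		remote, _ := time.Parse(layout, "2024-10-14 13:55:32.961847349 +0000 UTC")

		delta := guest.Sub(remote) // 98.643222ms, as reported above
		const tolerance = 2 * time.Second // assumed tolerance, for illustration only
		fmt.Printf("delta=%v within tolerance=%v: %v\n", delta, tolerance, delta > -tolerance && delta < tolerance)
	}
	)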
	I1014 13:55:33.079431   25306 start.go:83] releasing machines lock for "ha-450021-m02", held for 28.205646747s
	I1014 13:55:33.079452   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.079689   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:33.082245   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.082619   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.082645   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.085035   25306 out.go:177] * Found network options:
	I1014 13:55:33.086426   25306 out.go:177]   - NO_PROXY=192.168.39.176
	W1014 13:55:33.087574   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.087613   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088138   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088304   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088401   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:55:33.088445   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	W1014 13:55:33.088467   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.088536   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:55:33.088557   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:33.091084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091497   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091525   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091675   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091813   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091867   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.091959   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.092027   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092088   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092156   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.092203   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.324240   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:55:33.330527   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:55:33.330586   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:55:33.345640   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:55:33.345657   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:55:33.345701   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:55:33.361741   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:55:33.375019   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:55:33.375071   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:55:33.388301   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:55:33.401227   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:55:33.511329   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:55:33.658848   25306 docker.go:233] disabling docker service ...
	I1014 13:55:33.658913   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:55:33.673279   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:55:33.685917   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:55:33.818316   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:55:33.936222   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:55:33.950467   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:55:33.970208   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:55:33.970265   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.984110   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:55:33.984169   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.995549   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.006565   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.018479   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:55:34.030013   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.041645   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.059707   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.070442   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:55:34.080309   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:55:34.080366   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:55:34.093735   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:55:34.103445   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:34.215901   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:55:34.308754   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:55:34.308820   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:55:34.313625   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:55:34.313676   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:55:34.317635   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:55:34.356534   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:55:34.356604   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.384187   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.414404   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:55:34.415699   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:55:34.416965   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:34.419296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419601   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:34.419628   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419811   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:55:34.423754   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:55:34.435980   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:55:34.436151   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:34.436381   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.436419   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.450826   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I1014 13:55:34.451213   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.451655   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.451677   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.451944   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.452123   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:34.453521   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:34.453781   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.453811   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.467708   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I1014 13:55:34.468144   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.468583   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.468597   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.468863   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.469023   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:34.469168   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.89
	I1014 13:55:34.469180   25306 certs.go:194] generating shared ca certs ...
	I1014 13:55:34.469197   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.469314   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:55:34.469365   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:55:34.469378   25306 certs.go:256] generating profile certs ...
	I1014 13:55:34.469463   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:55:34.469494   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796
	I1014 13:55:34.469515   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.254]
	I1014 13:55:34.810302   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 ...
	I1014 13:55:34.810336   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796: {Name:mk62309e383c07d7599f8a1200bdc69462a2d14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810530   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 ...
	I1014 13:55:34.810549   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796: {Name:mkf013e40a46367f5d473382a243ff918ed6f0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810679   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:55:34.810843   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:55:34.811031   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
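	(The apiserver serving cert generated above lists the cluster service IP, localhost, both control-plane node IPs and the HA VIP as IP SANs. A minimal, standard-library-only Go sketch of issuing such a cert is below; the key type, subject names and validity periods are assumptions for illustration and the CA here is freshly generated, whereas minikube reuses its existing minikubeCA key pair:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative CA; minikube would load the existing minikubeCA cert and key instead.
		caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// apiserver leaf cert carrying the IP SANs listed in the log above.
		leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.176"), net.ParseIP("192.168.39.89"), net.ParseIP("192.168.39.254"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}
	)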
	I1014 13:55:34.811055   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:55:34.811078   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:55:34.811100   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:55:34.811122   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:55:34.811141   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:55:34.811162   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:55:34.811184   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:55:34.811205   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:55:34.811281   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:55:34.811405   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:55:34.811439   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:55:34.811482   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:55:34.811508   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:55:34.811530   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:55:34.811573   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:34.811602   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:34.811623   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:55:34.811635   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:55:34.811667   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:34.814657   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815058   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:34.815083   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815262   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:34.815417   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:34.815552   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:34.815647   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:34.891004   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:55:34.895702   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:55:34.906613   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:55:34.910438   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:55:34.923172   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:55:34.928434   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:55:34.941440   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:55:34.946469   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:55:34.957168   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:55:34.961259   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:55:34.972556   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:55:34.980332   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:55:34.991839   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:55:35.019053   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:55:35.043395   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:55:35.066158   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:55:35.088175   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 13:55:35.110925   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 13:55:35.134916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:55:35.158129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:55:35.180405   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:55:35.202548   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:55:35.225992   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:55:35.249981   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:55:35.266180   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:55:35.282687   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:55:35.299271   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:55:35.316623   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:55:35.332853   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:55:35.348570   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:55:35.364739   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:55:35.370372   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:55:35.380736   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385152   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385211   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.390839   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:55:35.401523   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:55:35.412185   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416457   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416547   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.421940   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:55:35.432212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:55:35.442100   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446159   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446196   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.451427   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:55:35.461211   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:55:35.465126   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:55:35.465175   25306 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.1 crio true true} ...
	I1014 13:55:35.465273   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:55:35.465315   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:55:35.465353   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:55:35.480860   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:55:35.480912   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 13:55:35.480953   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.489708   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:55:35.489755   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.498478   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:55:35.498498   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498541   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498556   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1014 13:55:35.498585   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1014 13:55:35.502947   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:55:35.502966   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:55:36.107052   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.107146   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.112161   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:55:36.112193   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:55:36.135646   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:55:36.156399   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.156509   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.173587   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:55:36.173634   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1014 13:55:36.629216   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:55:36.638544   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:55:36.654373   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:55:36.670100   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:55:36.685420   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:55:36.689062   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:55:36.700413   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:36.822396   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:36.840300   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:36.840777   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:36.840820   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:36.856367   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I1014 13:55:36.856879   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:36.857323   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:36.857351   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:36.857672   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:36.857841   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:36.857975   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cluster
Name:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:55:36.858071   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:55:36.858091   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:36.860736   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861146   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:36.861185   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861337   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:36.861529   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:36.861694   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:36.861807   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:37.015771   25306 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:37.015819   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1014 13:55:58.710606   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (21.694741621s)
	I1014 13:55:58.710647   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:55:59.236903   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m02 minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:55:59.350641   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:55:59.452342   25306 start.go:319] duration metric: took 22.5943626s to joinCluster
	I1014 13:55:59.452418   25306 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:59.452735   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:59.453925   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:55:59.454985   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:59.700035   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:59.782880   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:59.783215   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:55:59.783307   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
	I1014 13:55:59.783576   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m02" to be "Ready" ...
	I1014 13:55:59.783682   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:55:59.783696   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:59.783707   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:59.783718   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:59.796335   25306 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 13:56:00.284246   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.284269   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.284281   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.284288   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.300499   25306 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1014 13:56:00.784180   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.784204   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.784212   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.784217   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.811652   25306 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1014 13:56:01.284849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.284881   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.284893   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.284898   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.288918   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:01.783917   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.783937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.783945   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.783949   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.787799   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:01.788614   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:02.284602   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.284624   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.284632   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.284642   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.290773   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:02.783789   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.783815   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.783826   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.783831   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.788075   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.284032   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.284057   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.284068   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.284074   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.287614   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:03.783925   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.783945   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.783953   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.783956   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.788205   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.788893   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:04.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.283987   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.283995   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.283999   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.287325   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:04.784192   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.784212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.784219   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.784225   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.787474   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:05.284787   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.284804   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.284813   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.284815   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.293558   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:05.784473   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.784495   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.784505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.784509   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.787964   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:06.283912   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.283936   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.283946   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.283954   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.286733   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:06.287200   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:06.784670   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.784694   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.784706   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.784711   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.788422   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:07.283873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.283901   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.283913   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.283918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.286693   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:07.784588   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.784609   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.784617   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.784621   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.787856   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:08.284107   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.284126   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.284134   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.284138   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.287096   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:08.287719   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:08.784096   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.784116   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.784124   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.784127   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.787645   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.284728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.284752   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.284759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.284764   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.288184   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.784057   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.784097   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.784108   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.784122   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.793007   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:10.284378   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.284400   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.284408   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.284413   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.287852   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:10.288463   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:10.783831   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.783850   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.783858   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.783862   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.787590   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:11.284759   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.284781   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.284790   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.284794   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.287610   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:11.784640   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.784659   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.784667   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.784672   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.787776   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:12.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.283997   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.284009   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.284014   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.289974   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:56:12.290779   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:12.784021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.784047   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.784061   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.784069   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.787917   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.283870   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.283893   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.283901   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.287328   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.784620   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.784644   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.784653   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.784657   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.787810   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.283867   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.283892   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.283900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.287541   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.784419   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.784440   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.784447   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.784450   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.787853   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.788359   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:15.284687   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.284709   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.284720   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.284726   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.287861   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.288461   25306 node_ready.go:49] node "ha-450021-m02" has status "Ready":"True"
	I1014 13:56:15.288480   25306 node_ready.go:38] duration metric: took 15.504881835s for node "ha-450021-m02" to be "Ready" ...
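The wait above is a plain poll: a GET against /api/v1/nodes/ha-450021-m02 roughly every 500ms until the node's Ready condition reports True (about 15.5s in this run). A minimal client-go sketch of the same check, assuming a kubeconfig path that is not part of this log:

    // Illustrative sketch only; the kubeconfig path is an assumption, not taken from the log.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        for {
            node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-450021-m02", metav1.GetOptions{})
            if err == nil {
                for _, cond := range node.Status.Conditions {
                    if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms spacing of the GETs in the log
        }
    }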
	I1014 13:56:15.288487   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:56:15.288543   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:15.288553   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.288559   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.288563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.292417   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.298105   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.298175   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:56:15.298182   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.298189   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.298194   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.300838   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.301679   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.301692   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.301699   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.301703   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.304037   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.304599   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.304614   25306 pod_ready.go:82] duration metric: took 6.489417ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304622   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304661   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:56:15.304669   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.304683   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.304694   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.306880   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.307573   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.307590   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.307600   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.307610   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.309331   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.309944   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.309963   25306 pod_ready.go:82] duration metric: took 5.334499ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.309975   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.310021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:56:15.310032   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.310044   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.310060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312281   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.312954   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.312972   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.312984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.314997   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.315561   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.315581   25306 pod_ready.go:82] duration metric: took 5.597491ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315592   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315648   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:56:15.315660   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.315671   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.315680   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.317496   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.318188   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.318205   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.318217   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.318224   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.320143   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.320663   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.320681   25306 pod_ready.go:82] duration metric: took 5.077444ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.320700   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.485053   25306 request.go:632] Waited for 164.298634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485113   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485118   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.485126   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.485130   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.488373   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.685383   25306 request.go:632] Waited for 196.403765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685451   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685458   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.685469   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.685478   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.688990   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.689603   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.689627   25306 pod_ready.go:82] duration metric: took 368.913108ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
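The "Waited for … due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter: the back-to-back pod and node GETs exceed the client's QPS/Burst budget and are delayed locally before reaching the API server. A hedged sketch of the knobs involved (the values and kubeconfig path are illustrative, not minikube's):

    // Illustrative only: the client-side limits that produce the "Waited for ..." messages above.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // rest.Config.QPS; the low default is what throttles bursts of requests
        cfg.Burst = 100 // rest.Config.Burst; short bursts beyond QPS are allowed up to this size
        _ = kubernetes.NewForConfigOrDie(cfg)
    }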
	I1014 13:56:15.689641   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.885558   25306 request.go:632] Waited for 195.846701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885605   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885611   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.885618   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.885623   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.889124   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.084785   25306 request.go:632] Waited for 194.38123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084845   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.084853   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.084857   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.088301   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.088998   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.089015   25306 pod_ready.go:82] duration metric: took 399.36552ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.089025   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.285209   25306 request.go:632] Waited for 196.12444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285293   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285302   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.285313   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.285319   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.289023   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.485127   25306 request.go:632] Waited for 195.353812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485198   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.485224   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.485231   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.488483   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.489170   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.489190   25306 pod_ready.go:82] duration metric: took 400.158231ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.489202   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.685336   25306 request.go:632] Waited for 196.062822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685418   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685429   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.685440   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.685449   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.688757   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.884883   25306 request.go:632] Waited for 195.393841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884933   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.884945   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.884950   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.888074   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.888564   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.888582   25306 pod_ready.go:82] duration metric: took 399.371713ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.888594   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.084731   25306 request.go:632] Waited for 196.036159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084792   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084799   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.084811   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.084818   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.088594   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.284774   25306 request.go:632] Waited for 195.293808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284866   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284878   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.284889   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.284900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.288050   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.288623   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.288647   25306 pod_ready.go:82] duration metric: took 400.044261ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.288659   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.485648   25306 request.go:632] Waited for 196.912408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485723   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485734   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.485744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.485752   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.488420   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:17.685402   25306 request.go:632] Waited for 196.37897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685455   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685460   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.685467   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.685471   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.689419   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.690366   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.690386   25306 pod_ready.go:82] duration metric: took 401.717488ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.690395   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.885498   25306 request.go:632] Waited for 195.043697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885569   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.885576   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.885581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.888648   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.085570   25306 request.go:632] Waited for 196.366356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085639   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085649   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.085660   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.085668   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.088834   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.089495   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.089519   25306 pod_ready.go:82] duration metric: took 399.116695ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.089532   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.285606   25306 request.go:632] Waited for 196.011378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285677   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285685   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.285693   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.285699   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.288947   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.484902   25306 request.go:632] Waited for 195.327209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484963   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484970   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.484981   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.484989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.488080   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.488592   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.488612   25306 pod_ready.go:82] duration metric: took 399.071687ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.488628   25306 pod_ready.go:39] duration metric: took 3.200130009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
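Each pod wait above pairs a GET for the pod with a GET for its node, which is why the requests arrive in twos. Checking one of the listed system-critical labels for Ready pods could look like the following sketch (the label is taken from the log; the kubeconfig path is assumed):

    // Illustrative only: list kube-system pods by one of the labels named above
    // and report whether each has the Ready condition set to True.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, cond := range p.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s ready=%v\n", p.Name, ready)
        }
    }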
	I1014 13:56:18.488645   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:56:18.488706   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:56:18.504222   25306 api_server.go:72] duration metric: took 19.051768004s to wait for apiserver process to appear ...
	I1014 13:56:18.504252   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:56:18.504274   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:56:18.508419   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1014 13:56:18.508480   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:56:18.508494   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.508504   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.508511   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.509353   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:56:18.509470   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:56:18.509489   25306 api_server.go:131] duration metric: took 5.230064ms to wait for apiserver health ...
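The health check above is a raw GET of /healthz (expected body "ok") followed by /version, which is where the control-plane version v1.31.1 is read from. The same two probes through client-go's discovery client, as a sketch under the same kubeconfig assumption as before:

    // Illustrative only: probe /healthz and /version the way the log does.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body) // "ok" on a healthy apiserver

        ver, err := client.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", ver.GitVersion) // v1.31.1 in this run
    }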
	I1014 13:56:18.509499   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:56:18.684863   25306 request.go:632] Waited for 175.279951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684960   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684974   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.684985   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.684994   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.691157   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:18.697135   25306 system_pods.go:59] 17 kube-system pods found
	I1014 13:56:18.697234   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:18.697252   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:18.697264   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:18.697271   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:18.697279   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:18.697284   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:18.697290   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:18.697299   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:18.697305   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:18.697314   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:18.697319   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:18.697328   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:18.697334   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:18.697340   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:18.697345   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:18.697350   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:18.697356   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:18.697364   25306 system_pods.go:74] duration metric: took 187.854432ms to wait for pod list to return data ...
	I1014 13:56:18.697375   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:56:18.884741   25306 request.go:632] Waited for 187.279644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884797   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884802   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.884809   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.884813   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.888582   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.888812   25306 default_sa.go:45] found service account: "default"
	I1014 13:56:18.888830   25306 default_sa.go:55] duration metric: took 191.448571ms for default service account to be created ...
	I1014 13:56:18.888841   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:56:19.085294   25306 request.go:632] Waited for 196.363765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085358   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085366   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.085377   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.085383   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.092864   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:56:19.097323   25306 system_pods.go:86] 17 kube-system pods found
	I1014 13:56:19.097351   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:19.097357   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:19.097362   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:19.097366   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:19.097370   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:19.097374   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:19.097377   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:19.097382   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:19.097387   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:19.097390   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:19.097394   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:19.097398   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:19.097402   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:19.097411   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:19.097417   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:19.097420   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:19.097423   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:19.097429   25306 system_pods.go:126] duration metric: took 208.581366ms to wait for k8s-apps to be running ...
	I1014 13:56:19.097436   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:56:19.097477   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:19.112071   25306 system_svc.go:56] duration metric: took 14.628482ms WaitForService to wait for kubelet
	I1014 13:56:19.112097   25306 kubeadm.go:582] duration metric: took 19.659648051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:56:19.112113   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:56:19.285537   25306 request.go:632] Waited for 173.355083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285629   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285637   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.285649   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.285654   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.289726   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:19.290673   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290698   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290712   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290717   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290723   25306 node_conditions.go:105] duration metric: took 178.605419ms to run NodePressure ...
	I1014 13:56:19.290740   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:56:19.290784   25306 start.go:255] writing updated cluster config ...
	I1014 13:56:19.292978   25306 out.go:201] 
	I1014 13:56:19.294410   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:19.294496   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.296041   25306 out.go:177] * Starting "ha-450021-m03" control-plane node in "ha-450021" cluster
	I1014 13:56:19.297096   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:56:19.297116   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:56:19.297204   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:56:19.297214   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:56:19.297292   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.297485   25306 start.go:360] acquireMachinesLock for ha-450021-m03: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:56:19.297521   25306 start.go:364] duration metric: took 20.106µs to acquireMachinesLock for "ha-450021-m03"
	I1014 13:56:19.297537   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:19.297616   25306 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1014 13:56:19.299122   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:56:19.299222   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:19.299255   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:19.313918   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I1014 13:56:19.314305   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:19.314837   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:19.314851   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:19.315181   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:19.315347   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:19.315509   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:19.315639   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:56:19.315670   25306 client.go:168] LocalClient.Create starting
	I1014 13:56:19.315704   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:56:19.315748   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315768   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315834   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:56:19.315859   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315870   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315884   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:56:19.315892   25306 main.go:141] libmachine: (ha-450021-m03) Calling .PreCreateCheck
	I1014 13:56:19.316068   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:19.316425   25306 main.go:141] libmachine: Creating machine...
	I1014 13:56:19.316438   25306 main.go:141] libmachine: (ha-450021-m03) Calling .Create
	I1014 13:56:19.316586   25306 main.go:141] libmachine: (ha-450021-m03) Creating KVM machine...
	I1014 13:56:19.317686   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing default KVM network
	I1014 13:56:19.317799   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing private KVM network mk-ha-450021
	I1014 13:56:19.317961   25306 main.go:141] libmachine: (ha-450021-m03) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.317988   25306 main.go:141] libmachine: (ha-450021-m03) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:56:19.318035   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.317950   26053 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.318138   25306 main.go:141] libmachine: (ha-450021-m03) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:56:19.552577   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.552461   26053 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa...
	I1014 13:56:19.731749   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731620   26053 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk...
	I1014 13:56:19.731783   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing magic tar header
	I1014 13:56:19.731797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing SSH key tar header
	I1014 13:56:19.731814   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731727   26053 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.731831   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03
	I1014 13:56:19.731859   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 (perms=drwx------)
	I1014 13:56:19.731873   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:56:19.731885   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:56:19.731899   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:56:19.731913   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.731942   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:56:19.731955   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:56:19.731964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:56:19.731973   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home
	I1014 13:56:19.731985   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:56:19.732001   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:56:19.732012   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:56:19.732026   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:19.732040   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Skipping /home - not owner
	I1014 13:56:19.732949   25306 main.go:141] libmachine: (ha-450021-m03) define libvirt domain using xml: 
	I1014 13:56:19.732973   25306 main.go:141] libmachine: (ha-450021-m03) <domain type='kvm'>
	I1014 13:56:19.732984   25306 main.go:141] libmachine: (ha-450021-m03)   <name>ha-450021-m03</name>
	I1014 13:56:19.732992   25306 main.go:141] libmachine: (ha-450021-m03)   <memory unit='MiB'>2200</memory>
	I1014 13:56:19.733004   25306 main.go:141] libmachine: (ha-450021-m03)   <vcpu>2</vcpu>
	I1014 13:56:19.733014   25306 main.go:141] libmachine: (ha-450021-m03)   <features>
	I1014 13:56:19.733021   25306 main.go:141] libmachine: (ha-450021-m03)     <acpi/>
	I1014 13:56:19.733031   25306 main.go:141] libmachine: (ha-450021-m03)     <apic/>
	I1014 13:56:19.733038   25306 main.go:141] libmachine: (ha-450021-m03)     <pae/>
	I1014 13:56:19.733044   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733056   25306 main.go:141] libmachine: (ha-450021-m03)   </features>
	I1014 13:56:19.733071   25306 main.go:141] libmachine: (ha-450021-m03)   <cpu mode='host-passthrough'>
	I1014 13:56:19.733081   25306 main.go:141] libmachine: (ha-450021-m03)   
	I1014 13:56:19.733089   25306 main.go:141] libmachine: (ha-450021-m03)   </cpu>
	I1014 13:56:19.733099   25306 main.go:141] libmachine: (ha-450021-m03)   <os>
	I1014 13:56:19.733106   25306 main.go:141] libmachine: (ha-450021-m03)     <type>hvm</type>
	I1014 13:56:19.733117   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='cdrom'/>
	I1014 13:56:19.733126   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='hd'/>
	I1014 13:56:19.733136   25306 main.go:141] libmachine: (ha-450021-m03)     <bootmenu enable='no'/>
	I1014 13:56:19.733151   25306 main.go:141] libmachine: (ha-450021-m03)   </os>
	I1014 13:56:19.733160   25306 main.go:141] libmachine: (ha-450021-m03)   <devices>
	I1014 13:56:19.733169   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='cdrom'>
	I1014 13:56:19.733183   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/boot2docker.iso'/>
	I1014 13:56:19.733196   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hdc' bus='scsi'/>
	I1014 13:56:19.733209   25306 main.go:141] libmachine: (ha-450021-m03)       <readonly/>
	I1014 13:56:19.733218   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733227   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='disk'>
	I1014 13:56:19.733239   25306 main.go:141] libmachine: (ha-450021-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:56:19.733252   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk'/>
	I1014 13:56:19.733266   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hda' bus='virtio'/>
	I1014 13:56:19.733278   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733286   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733298   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='mk-ha-450021'/>
	I1014 13:56:19.733306   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733315   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733325   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733356   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='default'/>
	I1014 13:56:19.733373   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733379   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733383   25306 main.go:141] libmachine: (ha-450021-m03)     <serial type='pty'>
	I1014 13:56:19.733387   25306 main.go:141] libmachine: (ha-450021-m03)       <target port='0'/>
	I1014 13:56:19.733394   25306 main.go:141] libmachine: (ha-450021-m03)     </serial>
	I1014 13:56:19.733399   25306 main.go:141] libmachine: (ha-450021-m03)     <console type='pty'>
	I1014 13:56:19.733403   25306 main.go:141] libmachine: (ha-450021-m03)       <target type='serial' port='0'/>
	I1014 13:56:19.733410   25306 main.go:141] libmachine: (ha-450021-m03)     </console>
	I1014 13:56:19.733415   25306 main.go:141] libmachine: (ha-450021-m03)     <rng model='virtio'>
	I1014 13:56:19.733430   25306 main.go:141] libmachine: (ha-450021-m03)       <backend model='random'>/dev/random</backend>
	I1014 13:56:19.733436   25306 main.go:141] libmachine: (ha-450021-m03)     </rng>
	I1014 13:56:19.733441   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733445   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733449   25306 main.go:141] libmachine: (ha-450021-m03)   </devices>
	I1014 13:56:19.733455   25306 main.go:141] libmachine: (ha-450021-m03) </domain>
	I1014 13:56:19.733462   25306 main.go:141] libmachine: (ha-450021-m03) 
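The XML above is the complete libvirt domain definition the KVM driver renders for the third control-plane node: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO attached as a cdrom, the raw disk image, and two virtio NICs on the `default` and `mk-ha-450021` networks. A minimal sketch, assuming the `libvirt.org/go/libvirt` bindings and a `qemu:///system` connection, of how such a document is turned into a running domain; this is an illustration of the libvirt calls involved, not the driver's actual code.

```go
// define_domain.go - hypothetical sketch: define and start a libvirt domain
// from a pre-rendered XML document, as logged for ha-450021-m03.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// The <domain>...</domain> document logged above, saved to a file.
	xml, err := os.ReadFile("ha-450021-m03.xml")
	if err != nil {
		log.Fatal(err)
	}

	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Persistently define the domain, then boot it ("Creating domain..." in the log).
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain started; now waiting for a DHCP lease to learn its IP")
}
```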
	I1014 13:56:19.740127   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:3e:d5:3c in network default
	I1014 13:56:19.740688   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring networks are active...
	I1014 13:56:19.740710   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:19.741382   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network default is active
	I1014 13:56:19.741753   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network mk-ha-450021 is active
	I1014 13:56:19.742099   25306 main.go:141] libmachine: (ha-450021-m03) Getting domain xml...
	I1014 13:56:19.742834   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:21.010084   25306 main.go:141] libmachine: (ha-450021-m03) Waiting to get IP...
	I1014 13:56:21.010944   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.011316   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.011377   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.011315   26053 retry.go:31] will retry after 306.133794ms: waiting for machine to come up
	I1014 13:56:21.318826   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.319333   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.319361   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.319280   26053 retry.go:31] will retry after 366.66223ms: waiting for machine to come up
	I1014 13:56:21.687816   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.688312   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.688353   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.688274   26053 retry.go:31] will retry after 390.93754ms: waiting for machine to come up
	I1014 13:56:22.080797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.081263   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.081290   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.081223   26053 retry.go:31] will retry after 398.805239ms: waiting for machine to come up
	I1014 13:56:22.481851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.482319   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.482343   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.482287   26053 retry.go:31] will retry after 640.042779ms: waiting for machine to come up
	I1014 13:56:23.123714   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:23.124086   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:23.124144   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:23.124073   26053 retry.go:31] will retry after 920.9874ms: waiting for machine to come up
	I1014 13:56:24.047070   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.047392   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.047414   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.047351   26053 retry.go:31] will retry after 897.422021ms: waiting for machine to come up
	I1014 13:56:24.946948   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.947347   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.947372   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.947310   26053 retry.go:31] will retry after 1.40276044s: waiting for machine to come up
	I1014 13:56:26.351855   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:26.352313   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:26.352340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:26.352279   26053 retry.go:31] will retry after 1.726907493s: waiting for machine to come up
	I1014 13:56:28.080396   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:28.080846   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:28.080875   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:28.080790   26053 retry.go:31] will retry after 1.482180268s: waiting for machine to come up
	I1014 13:56:29.564825   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:29.565318   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:29.565340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:29.565288   26053 retry.go:31] will retry after 2.541525756s: waiting for machine to come up
	I1014 13:56:32.109990   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:32.110440   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:32.110469   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:32.110395   26053 retry.go:31] will retry after 2.914830343s: waiting for machine to come up
	I1014 13:56:35.026789   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:35.027206   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:35.027240   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:35.027152   26053 retry.go:31] will retry after 3.572900713s: waiting for machine to come up
	I1014 13:56:38.603496   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:38.603914   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:38.603943   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:38.603867   26053 retry.go:31] will retry after 3.566960315s: waiting for machine to come up
	I1014 13:56:42.173796   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174271   25306 main.go:141] libmachine: (ha-450021-m03) Found IP for machine: 192.168.39.55
	I1014 13:56:42.174288   25306 main.go:141] libmachine: (ha-450021-m03) Reserving static IP address...
	I1014 13:56:42.174301   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has current primary IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174679   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "ha-450021-m03", mac: "52:54:00:af:04:2c", ip: "192.168.39.55"} in network mk-ha-450021
	I1014 13:56:42.249586   25306 main.go:141] libmachine: (ha-450021-m03) Reserved static IP address: 192.168.39.55
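The `retry.go:31` entries above show the driver polling the network's DHCP leases with a growing, jittered delay until the new MAC address acquires an address, at which point the lease is pinned as a static reservation. A minimal, generic sketch of that wait-with-backoff pattern; `lookupIP` here is a hypothetical stand-in for the lease query.

```go
// waitforip.go - sketch of the retry-with-backoff loop behind the
// "will retry after ..." lines while waiting for the machine's IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("unable to find current IP address")

// lookupIP stands in for querying the libvirt network's DHCP leases by MAC.
func lookupIP(mac string) (string, error) {
	return "", errNoLease // pretend the lease has not appeared yet
}

func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	for attempt := 1; time.Since(start) < deadline; attempt++ {
		ip, err := lookupIP(mac)
		if err == nil {
			return ip, nil
		}
		// Grow the delay with the attempt number, add jitter, and cap it;
		// this yields the irregular 306ms ... 3.5s waits seen in the log.
		delay := time.Duration(attempt) * 300 * time.Millisecond
		delay += time.Duration(rand.Int63n(int64(200 * time.Millisecond)))
		if delay > 4*time.Second {
			delay = 4 * time.Second
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	if _, err := waitForIP("52:54:00:af:04:2c", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
```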
	I1014 13:56:42.249623   25306 main.go:141] libmachine: (ha-450021-m03) Waiting for SSH to be available...
	I1014 13:56:42.249632   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:42.252725   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.253185   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021
	I1014 13:56:42.253208   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:af:04:2c
	I1014 13:56:42.253434   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:42.253458   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:42.253486   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:42.253504   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:42.253518   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:42.256978   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:56:42.256996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:56:42.257003   25306 main.go:141] libmachine: (ha-450021-m03) DBG | command : exit 0
	I1014 13:56:42.257008   25306 main.go:141] libmachine: (ha-450021-m03) DBG | err     : exit status 255
	I1014 13:56:42.257014   25306 main.go:141] libmachine: (ha-450021-m03) DBG | output  : 
	I1014 13:56:45.257522   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:45.260212   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260696   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.260726   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260786   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:45.260815   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:45.260836   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:45.260845   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:45.260853   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:45.382585   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: <nil>: 
	I1014 13:56:45.382879   25306 main.go:141] libmachine: (ha-450021-m03) KVM machine creation complete!
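`WaitForSSH` above simply runs `exit 0` over SSH until the guest's sshd answers: the first attempt at 13:56:42 fails with exit status 255 (the guest is not reachable yet), the retry at 13:56:45 succeeds and machine creation is declared complete. A small sketch of that probe using the external `ssh` client with the options from the log; host, key path, and retry policy are illustrative.

```go
// sshprobe.go - hypothetical sketch: probe SSH readiness by running "exit 0"
// with the client options that appear in the WaitForSSH log lines.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@"+host,
		"exit 0")
	return cmd.Run() == nil // exit status 0 means sshd accepted the session
}

func main() {
	host := "192.168.39.55"
	key := "/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready, retrying in 3s")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```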
	I1014 13:56:45.383199   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:45.383711   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.383880   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.384004   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:56:45.384014   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetState
	I1014 13:56:45.385264   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:56:45.385276   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:56:45.385281   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:56:45.385287   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.387787   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388084   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.388108   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388291   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.388456   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388593   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.388830   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.389029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.389040   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:56:45.485735   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:56:45.485758   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:56:45.485768   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.488882   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489166   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.489189   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489303   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.489486   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489751   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.489875   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.490046   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.490060   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:56:45.587324   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:56:45.587394   25306 main.go:141] libmachine: found compatible host: buildroot
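The provisioner is selected by reading `/etc/os-release` over SSH; the `ID=buildroot` field in the output above is what maps the guest to the buildroot provisioner. A short sketch of extracting that field (reading a local file here instead of going through the SSH runner):

```go
// osrelease.go - sketch: read the ID= field from an os-release style file,
// which is how the "found compatible host: buildroot" decision is made.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func osID(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`), nil
		}
	}
	return "", fmt.Errorf("no ID= entry in %s", path)
}

func main() {
	id, err := osID("/etc/os-release")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("detected provisioner id:", id) // "buildroot" on the minikube guest
}
```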
	I1014 13:56:45.587407   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:56:45.587422   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587668   25306 buildroot.go:166] provisioning hostname "ha-450021-m03"
	I1014 13:56:45.587694   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587891   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.589987   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590329   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.590355   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590484   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.590650   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590770   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590887   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.591045   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.591197   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.591208   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m03 && echo "ha-450021-m03" | sudo tee /etc/hostname
	I1014 13:56:45.708548   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m03
	
	I1014 13:56:45.708578   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.711602   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.711972   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.711996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.712173   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.712328   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712490   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.712744   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.712915   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.712938   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:56:45.819779   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
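The two SSH commands above first set the hostname and write `/etc/hostname`, then make sure `/etc/hosts` maps `127.0.1.1` to the new name, rewriting an existing `127.0.1.1` line if present and appending one otherwise. A sketch of rendering that provisioning script for an arbitrary node name, mirroring the logged shell; the helper name is illustrative.

```go
// hostnamecmd.go - sketch: render the hostname + /etc/hosts provisioning
// script shown in the log for a given node name.
package main

import "fmt"

func hostnameScript(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameScript("ha-450021-m03"))
}
```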
	I1014 13:56:45.819813   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:56:45.819833   25306 buildroot.go:174] setting up certificates
	I1014 13:56:45.819844   25306 provision.go:84] configureAuth start
	I1014 13:56:45.819857   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.820154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:45.823118   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823460   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.823487   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823678   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.825593   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.825969   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.826000   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.826082   25306 provision.go:143] copyHostCerts
	I1014 13:56:45.826120   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826162   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:56:45.826174   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826256   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:56:45.826387   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826414   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:56:45.826422   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826470   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:56:45.826533   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826559   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:56:45.826567   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826616   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:56:45.826689   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m03 san=[127.0.0.1 192.168.39.55 ha-450021-m03 localhost minikube]
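`configureAuth` generates a fresh server certificate for the new machine, signed by the local CA and carrying the SANs listed above (127.0.0.1, the node IP, its hostname, localhost, minikube). A compact sketch of producing a certificate with those SANs using `crypto/x509`; for brevity it self-signs instead of loading the `ca.pem`/`ca-key.pem` pair, so treat it as an illustration of the SAN handling only.

```go
// sancert.go - sketch: create an x509 server certificate whose SANs match the
// ones logged for ha-450021-m03 (self-signed here rather than CA-signed).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-450021-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-450021-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.55")},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```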
	I1014 13:56:45.954899   25306 provision.go:177] copyRemoteCerts
	I1014 13:56:45.954971   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:56:45.955000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.957506   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957791   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.957818   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957960   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.958125   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.958305   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.958436   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.036842   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:56:46.036916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:56:46.062450   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:56:46.062515   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:56:46.086853   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:56:46.086926   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:56:46.115352   25306 provision.go:87] duration metric: took 295.495227ms to configureAuth
	I1014 13:56:46.115379   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:56:46.115621   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:46.115716   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.118262   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118631   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.118656   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118842   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.119017   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119286   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.119431   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.119582   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.119596   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:56:46.343295   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:56:46.343323   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:56:46.343334   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetURL
	I1014 13:56:46.344763   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using libvirt version 6000000
	I1014 13:56:46.346964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347332   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.347354   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347553   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:56:46.347568   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:56:46.347575   25306 client.go:171] duration metric: took 27.031894224s to LocalClient.Create
	I1014 13:56:46.347595   25306 start.go:167] duration metric: took 27.031958272s to libmachine.API.Create "ha-450021"
	I1014 13:56:46.347605   25306 start.go:293] postStartSetup for "ha-450021-m03" (driver="kvm2")
	I1014 13:56:46.347614   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:56:46.347629   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.347825   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:56:46.347855   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.350344   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350734   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.350754   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350907   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.351098   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.351237   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.351388   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.433896   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:56:46.438009   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:56:46.438030   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:56:46.438090   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:56:46.438161   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:56:46.438171   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:56:46.438246   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:56:46.448052   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:46.472253   25306 start.go:296] duration metric: took 124.635752ms for postStartSetup
	I1014 13:56:46.472307   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:46.472896   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.475688   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476037   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.476063   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476352   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:46.476544   25306 start.go:128] duration metric: took 27.178917299s to createHost
	I1014 13:56:46.476567   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.478884   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479221   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.479251   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479355   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.479528   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479638   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479747   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.479874   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.480025   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.480035   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:56:46.583399   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914206.561472302
	
	I1014 13:56:46.583425   25306 fix.go:216] guest clock: 1728914206.561472302
	I1014 13:56:46.583435   25306 fix.go:229] Guest: 2024-10-14 13:56:46.561472302 +0000 UTC Remote: 2024-10-14 13:56:46.476556325 +0000 UTC m=+146.700269378 (delta=84.915977ms)
	I1014 13:56:46.583455   25306 fix.go:200] guest clock delta is within tolerance: 84.915977ms
	I1014 13:56:46.583460   25306 start.go:83] releasing machines lock for "ha-450021-m03", held for 27.285931106s
	I1014 13:56:46.583477   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.583714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.586281   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.586554   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.586578   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.589268   25306 out.go:177] * Found network options:
	I1014 13:56:46.590896   25306 out.go:177]   - NO_PROXY=192.168.39.176,192.168.39.89
	W1014 13:56:46.592325   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.592354   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.592374   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.592957   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593143   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593217   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:56:46.593262   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	W1014 13:56:46.593451   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.593472   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.593517   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:56:46.593532   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.596078   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596267   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596474   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596494   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596667   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.596762   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596784   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596836   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.596933   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.597000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597050   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.597134   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.597191   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597299   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.829516   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:56:46.836362   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:56:46.836435   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:56:46.855005   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:56:46.855034   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:56:46.855101   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:56:46.873805   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:56:46.888317   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:56:46.888368   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:56:46.902770   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:56:46.916283   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:56:47.031570   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:56:47.186900   25306 docker.go:233] disabling docker service ...
	I1014 13:56:47.186971   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:56:47.202040   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:56:47.215421   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:56:47.352807   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:56:47.479560   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:56:47.493558   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:56:47.511643   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:56:47.511704   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.521941   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:56:47.522055   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.534488   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.545529   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.555346   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:56:47.565221   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.574851   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.591247   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
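The sed sequence above pins the pause image to registry.k8s.io/pause:3.10, switches CRI-O to the cgroupfs cgroup manager with conmon placed in the "pod" cgroup, and adds a default_sysctls entry opening unprivileged low ports, all inside /etc/crio/crio.conf.d/02-crio.conf. A sketch of the configuration those edits converge on, written as a whole drop-in file; the logged run edits the existing file in place, so the separate drop-in path here is an assumption for illustration.

```go
// criodropin.go - sketch: write a CRI-O drop-in with the settings the logged
// sed edits produce (pause image, cgroupfs, conmon cgroup, default sysctls).
package main

import (
	"log"
	"os"
)

const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	// Hypothetical path; the test run edits /etc/crio/crio.conf.d/02-crio.conf instead.
	path := "/etc/crio/crio.conf.d/10-minikube-example.conf"
	if err := os.WriteFile(path, []byte(dropIn), 0644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote CRI-O drop-in; restart crio to apply")
}
```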
	I1014 13:56:47.601017   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:56:47.610150   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:56:47.610208   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:56:47.623643   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
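The `sysctl net.bridge.bridge-nf-call-iptables` failure above is expected on a freshly booted guest: that sysctl path only exists once the `br_netfilter` kernel module is loaded, which is why the next steps are `modprobe br_netfilter` and enabling IPv4 forwarding. A sketch of that check-then-load sequence (run as root; paths and module name as in the log):

```go
// brnetfilter.go - sketch: load br_netfilter if the bridge netfilter sysctl is
// missing, then enable IPv4 forwarding, mirroring the logged steps.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const bridgeSysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	if _, err := os.Stat(bridgeSysctl); os.IsNotExist(err) {
		// The sysctl file appears only after the module is loaded,
		// hence the modprobe fallback in the log.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		log.Fatal(err)
	}
	log.Println("br_netfilter available and ip_forward enabled")
}
```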
	I1014 13:56:47.632860   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:47.769053   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:56:47.859548   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:56:47.859617   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:56:47.864769   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:56:47.864838   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:56:47.868622   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:56:47.912151   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:56:47.912224   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.943678   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.974464   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:56:47.975982   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:56:47.977421   25306 out.go:177]   - env NO_PROXY=192.168.39.176,192.168.39.89
	I1014 13:56:47.978761   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:47.981382   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.981851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:47.981880   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.982078   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:56:47.986330   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:56:47.999765   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:56:47.999983   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:48.000276   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.000314   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.015013   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I1014 13:56:48.015440   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.015880   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.015898   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.016248   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.016426   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:56:48.017904   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:48.018185   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.018221   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.032080   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I1014 13:56:48.032532   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.033010   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.033034   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.033376   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.033566   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:48.033738   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.55
	I1014 13:56:48.033750   25306 certs.go:194] generating shared ca certs ...
	I1014 13:56:48.033771   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.033910   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:56:48.033951   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:56:48.033962   25306 certs.go:256] generating profile certs ...
	I1014 13:56:48.034054   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:56:48.034099   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2
	I1014 13:56:48.034119   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.55 192.168.39.254]
	I1014 13:56:48.250009   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 ...
	I1014 13:56:48.250065   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2: {Name:mk915feb36aa4db7e40387e7070135b42d923437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250246   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 ...
	I1014 13:56:48.250260   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2: {Name:mk5df80a68a940fb5e6799020daa8453d1ca5d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250346   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:56:48.250480   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:56:48.250647   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:56:48.250665   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:56:48.250682   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:56:48.250698   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:56:48.250714   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:56:48.250729   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:56:48.250744   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:56:48.250759   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:56:48.282713   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:56:48.282807   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:56:48.282843   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:56:48.282853   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:56:48.282876   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:56:48.282899   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:56:48.282919   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:56:48.282958   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:48.282987   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.283001   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.283013   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.283046   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:48.285808   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286249   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:48.286279   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286442   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:48.286648   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:48.286791   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:48.286909   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:48.366887   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:56:48.372822   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:56:48.386233   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:56:48.391254   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:56:48.402846   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:56:48.407460   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:56:48.418138   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:56:48.423366   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:56:48.435286   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:56:48.442980   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:56:48.457010   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:56:48.462031   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:56:48.475327   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:56:48.499553   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:56:48.526670   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:56:48.552105   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:56:48.577419   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1014 13:56:48.600650   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:56:48.623847   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:56:48.649170   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:56:48.674110   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:56:48.700598   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:56:48.725176   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:56:48.750067   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:56:48.767549   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:56:48.786866   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:56:48.804737   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:56:48.822022   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:56:48.840501   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:56:48.858556   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:56:48.875294   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:56:48.880974   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:56:48.892080   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896904   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896954   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.902856   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:56:48.914212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:56:48.926784   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931725   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931780   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.937633   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:56:48.949727   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:56:48.960604   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965337   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965398   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.970965   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:56:48.983521   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:56:48.987988   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:56:48.988067   25306 kubeadm.go:934] updating node {m03 192.168.39.55 8443 v1.31.1 crio true true} ...
	I1014 13:56:48.988197   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:56:48.988224   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:56:48.988260   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:56:49.006786   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:56:49.006878   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 13:56:49.006948   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.017177   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:56:49.017231   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1014 13:56:49.027571   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027572   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:56:49.027592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027633   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1014 13:56:49.027650   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027677   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:49.041850   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:56:49.041880   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:56:49.059453   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:56:49.059469   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.059486   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:56:49.059574   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.108836   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:56:49.108879   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
	I1014 13:56:49.922146   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:56:49.934057   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:56:49.951495   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:56:49.969831   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:56:49.987375   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:56:49.991392   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:56:50.004437   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:50.138457   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:56:50.156141   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:50.156664   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:50.156719   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:50.172505   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1014 13:56:50.172984   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:50.173395   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:50.173421   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:50.173801   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:50.173992   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:50.174119   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:56:50.174253   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:56:50.174270   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:50.177090   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177620   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:50.177652   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177788   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:50.177965   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:50.178111   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:50.178264   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:50.344835   25306 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:50.344884   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443"
	I1014 13:57:13.924825   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443": (23.579918283s)
	I1014 13:57:13.924874   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:57:14.548857   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m03 minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:57:14.695478   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:57:14.877781   25306 start.go:319] duration metric: took 24.703657095s to joinCluster
	I1014 13:57:14.877880   25306 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:57:14.878165   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:57:14.879747   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:57:14.881030   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:57:15.185770   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:57:15.218461   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:57:15.218911   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:57:15.218986   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
	I1014 13:57:15.219237   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m03" to be "Ready" ...
	I1014 13:57:15.219350   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.219360   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.219373   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.219378   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.231145   25306 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 13:57:15.719481   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.719504   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.719515   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.719523   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.723133   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.219449   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.219474   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.219486   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.219493   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.222753   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.719775   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.719794   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.719801   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.719805   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.723148   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.220337   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.220382   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.223796   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.224523   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:17.719785   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.719812   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.719823   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.719828   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.724599   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:18.219479   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.219497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.219505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.219510   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.222903   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:18.719939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.719958   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.719964   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.722786   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:19.220210   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.220235   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.220246   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.220251   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.223890   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:19.719936   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.719957   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.719965   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.725873   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:19.726613   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:20.219399   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.219418   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.219426   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.219429   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.222447   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:20.720283   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.720304   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.720311   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.720316   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.723293   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:21.219622   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.219643   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.219651   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.219655   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.223137   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:21.719413   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.719434   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.719441   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.719445   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.727130   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:21.728875   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:22.219563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.219584   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.219593   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.219597   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.222980   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:22.719873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.719897   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.719906   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.719910   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.723538   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.219424   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.219447   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.219456   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.219459   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.223288   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.719840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.719863   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.719870   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.719874   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.725306   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:24.220401   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.220427   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.220439   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.220448   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.224025   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:24.224423   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:24.720285   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.720311   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.720323   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.720331   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.724123   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.219820   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.219841   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.219849   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.219852   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.223237   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.720061   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.720081   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.720090   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.720095   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.727909   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:26.220029   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.220052   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.220060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.220065   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.223671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:26.719549   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.719569   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.719577   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.719581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.724063   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:26.724628   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:27.220196   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.220218   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.220230   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.227906   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:27.719535   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.719576   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.719587   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.719592   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.727292   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:28.219952   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.219973   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.219983   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.219988   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.223688   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:28.719432   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.719455   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.719463   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.719468   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.722896   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.219877   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.219901   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.219911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.219915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.223129   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.223965   25306 node_ready.go:49] node "ha-450021-m03" has status "Ready":"True"
	I1014 13:57:29.223987   25306 node_ready.go:38] duration metric: took 14.004731761s for node "ha-450021-m03" to be "Ready" ...
	I1014 13:57:29.223998   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:29.224060   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:29.224068   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.224075   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.224081   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.230054   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:29.238333   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.238422   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:57:29.238435   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.238446   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.238455   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.242284   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.243174   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.243194   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.243204   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.243210   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.245933   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.246411   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.246431   25306 pod_ready.go:82] duration metric: took 8.073653ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246440   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246494   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:57:29.246505   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.246515   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.246521   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.248883   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.249550   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.249563   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.249569   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.249573   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.251738   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.252240   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.252260   25306 pod_ready.go:82] duration metric: took 5.813932ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252268   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252312   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:57:29.252319   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.252326   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.252330   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.254629   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.255222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.255236   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.255243   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.255248   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.257432   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.257842   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.257858   25306 pod_ready.go:82] duration metric: took 5.5841ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257865   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257906   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:57:29.257913   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.257920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.257926   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.260016   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.260730   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:29.260748   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.260759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.260766   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.262822   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.263416   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.263434   25306 pod_ready.go:82] duration metric: took 5.562613ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.263445   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.420814   25306 request.go:632] Waited for 157.302029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420888   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420896   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.420904   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.420911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.423933   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.620244   25306 request.go:632] Waited for 195.721406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620303   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620309   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.620331   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.620359   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.623721   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.624232   25306 pod_ready.go:93] pod "etcd-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.624248   25306 pod_ready.go:82] duration metric: took 360.793531ms for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.624265   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.820803   25306 request.go:632] Waited for 196.4673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820871   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820878   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.820888   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.820899   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.825055   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:30.020658   25306 request.go:632] Waited for 194.868544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020733   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.020740   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.020744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.024136   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.024766   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.024782   25306 pod_ready.go:82] duration metric: took 400.510119ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.024791   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.220429   25306 request.go:632] Waited for 195.542568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220491   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.220508   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.220517   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.224059   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.420172   25306 request.go:632] Waited for 195.340177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420225   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420231   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.420238   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.420243   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.423967   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.424613   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.424631   25306 pod_ready.go:82] duration metric: took 399.833776ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.424640   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.620846   25306 request.go:632] Waited for 196.141352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620922   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620928   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.620935   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.620942   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.624496   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.820849   25306 request.go:632] Waited for 195.396807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820975   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.820988   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.820995   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.824502   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.825021   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.825046   25306 pod_ready.go:82] duration metric: took 400.398723ms for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.825059   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.020285   25306 request.go:632] Waited for 195.157008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020365   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.020385   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.020393   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.024268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.220585   25306 request.go:632] Waited for 195.341359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220643   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220650   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.220659   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.220664   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.224268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.224942   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.224972   25306 pod_ready.go:82] duration metric: took 399.90441ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.224991   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.419861   25306 request.go:632] Waited for 194.791136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419920   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419926   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.419934   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.419939   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.423671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.620170   25306 request.go:632] Waited for 195.363598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620257   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620267   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.620279   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.620289   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.623838   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.624806   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.624830   25306 pod_ready.go:82] duration metric: took 399.825307ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.624845   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.819925   25306 request.go:632] Waited for 194.986166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819986   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819995   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.820007   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.820020   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.823660   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.020870   25306 request.go:632] Waited for 196.217554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020953   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020964   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.020976   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.020984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.024484   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.025120   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.025154   25306 pod_ready.go:82] duration metric: took 400.297134ms for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.025174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.220154   25306 request.go:632] Waited for 194.89867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220229   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.220246   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.223571   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.420701   25306 request.go:632] Waited for 196.352524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420758   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420763   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.420770   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.420774   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.424213   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.424900   25306 pod_ready.go:93] pod "kube-proxy-9tbfp" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.424923   25306 pod_ready.go:82] duration metric: took 399.74019ms for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.424936   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.619849   25306 request.go:632] Waited for 194.848954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619902   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.619915   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.619918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.623593   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.820780   25306 request.go:632] Waited for 196.366155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820854   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.820863   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.820870   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.824510   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.825180   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.825196   25306 pod_ready.go:82] duration metric: took 400.2529ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.825205   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.020309   25306 request.go:632] Waited for 195.030338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020398   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020409   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.020421   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.020429   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.023944   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.220873   25306 request.go:632] Waited for 196.168894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220972   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220984   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.221002   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.221010   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.224398   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.225139   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.225161   25306 pod_ready.go:82] duration metric: took 399.9482ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.225174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.420278   25306 request.go:632] Waited for 195.028059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420352   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420358   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.420365   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.420370   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.423970   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.619940   25306 request.go:632] Waited for 195.292135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620017   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620024   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.620031   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.620038   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.623628   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.624429   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.624446   25306 pod_ready.go:82] duration metric: took 399.265054ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.624456   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.820766   25306 request.go:632] Waited for 196.250065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820834   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820840   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.820847   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.820861   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.824813   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.020844   25306 request.go:632] Waited for 195.391993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020901   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.020915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.020920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.025139   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.026105   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.026127   25306 pod_ready.go:82] duration metric: took 401.663759ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.026140   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.220315   25306 request.go:632] Waited for 194.095801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220368   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.220381   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.224012   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.420204   25306 request.go:632] Waited for 195.373756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420275   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420280   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.420288   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.420292   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.424022   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.424779   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.424801   25306 pod_ready.go:82] duration metric: took 398.654013ms for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.424816   25306 pod_ready.go:39] duration metric: took 5.200801864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:34.424833   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:57:34.424888   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:57:34.443450   25306 api_server.go:72] duration metric: took 19.56551851s to wait for apiserver process to appear ...
	I1014 13:57:34.443480   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:57:34.443507   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:57:34.447984   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1014 13:57:34.448076   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:57:34.448089   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.448100   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.448108   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.449007   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:57:34.449084   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:57:34.449104   25306 api_server.go:131] duration metric: took 5.616812ms to wait for apiserver health ...
	I1014 13:57:34.449115   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:57:34.620303   25306 request.go:632] Waited for 171.103547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620363   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.620380   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.620385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.626531   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:34.632849   25306 system_pods.go:59] 24 kube-system pods found
	I1014 13:57:34.632878   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:34.632883   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:34.632887   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:34.632891   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:34.632894   25306 system_pods.go:61] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:34.632897   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:34.632900   25306 system_pods.go:61] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:34.632903   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:34.632906   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:34.632909   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:34.632911   25306 system_pods.go:61] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:34.632915   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:34.632917   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:34.632920   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:34.632923   25306 system_pods.go:61] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:34.632926   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:34.632929   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:34.632931   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:34.632934   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:34.632937   25306 system_pods.go:61] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:34.632940   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:34.632942   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:34.632946   25306 system_pods.go:61] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:34.632948   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:34.632953   25306 system_pods.go:74] duration metric: took 183.830824ms to wait for pod list to return data ...
	I1014 13:57:34.632963   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:57:34.820472   25306 request.go:632] Waited for 187.441614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820540   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820546   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.820553   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.820563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.824880   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.824982   25306 default_sa.go:45] found service account: "default"
	I1014 13:57:34.824994   25306 default_sa.go:55] duration metric: took 192.026288ms for default service account to be created ...
	I1014 13:57:34.825002   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:57:35.020105   25306 request.go:632] Waited for 195.031126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020178   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020187   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.020199   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.020209   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.026365   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:35.032685   25306 system_pods.go:86] 24 kube-system pods found
	I1014 13:57:35.032713   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:35.032719   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:35.032722   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:35.032727   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:35.032731   25306 system_pods.go:89] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:35.032736   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:35.032739   25306 system_pods.go:89] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:35.032743   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:35.032747   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:35.032751   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:35.032754   25306 system_pods.go:89] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:35.032758   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:35.032763   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:35.032770   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:35.032774   25306 system_pods.go:89] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:35.032780   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:35.032783   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:35.032789   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:35.032793   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:35.032799   25306 system_pods.go:89] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:35.032803   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:35.032808   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:35.032811   25306 system_pods.go:89] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:35.032816   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:35.032822   25306 system_pods.go:126] duration metric: took 207.815391ms to wait for k8s-apps to be running ...
	I1014 13:57:35.032831   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:57:35.032872   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:57:35.048661   25306 system_svc.go:56] duration metric: took 15.819815ms WaitForService to wait for kubelet
	I1014 13:57:35.048694   25306 kubeadm.go:582] duration metric: took 20.170783435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:57:35.048713   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:57:35.220270   25306 request.go:632] Waited for 171.481631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220338   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220343   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.220351   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.220356   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.224271   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:35.225220   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225243   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225255   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225258   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225264   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225268   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225272   25306 node_conditions.go:105] duration metric: took 176.55497ms to run NodePressure ...
	I1014 13:57:35.225286   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:57:35.225306   25306 start.go:255] writing updated cluster config ...
	I1014 13:57:35.225629   25306 ssh_runner.go:195] Run: rm -f paused
	I1014 13:57:35.278941   25306 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:57:35.281235   25306 out.go:177] * Done! kubectl is now configured to use "ha-450021" cluster and "default" namespace by default
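
	The polling recorded above (each pod GET followed by a node GET, every request preceded by a "Waited for ~195ms due to client-side throttling, not priority and fairness" message) is the standard client-go pattern: the default client-side rate limiter (roughly QPS 5, burst 10 when left unset) spaces the requests out, and pod_ready.go checks each pod's Ready condition before moving to the next. The Go sketch below is illustrative only, not minikube's pod_ready.go; the kubeconfig path and the pod name (copied from the log) are assumptions.

	    package main

	    import (
	    	"context"
	    	"fmt"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )

	    // podReady reports whether the named pod has its Ready condition set to True —
	    // the same check the log above records as `has status "Ready":"True"`.
	    func podReady(ctx context.Context, cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	    	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	    	if err != nil {
	    		return false, err
	    	}
	    	for _, cond := range pod.Status.Conditions {
	    		if cond.Type == corev1.PodReady {
	    			return cond.Status == corev1.ConditionTrue, nil
	    		}
	    	}
	    	return false, nil
	    }

	    func main() {
	    	// Assumed kubeconfig location; minikube writes its context into ~/.kube/config by default.
	    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Leaving cfg.QPS/cfg.Burst unset keeps client-go's default client-side limiter,
	    	// which is what produces the "client-side throttling" wait messages in the log.
	    	cs, err := kubernetes.NewForConfig(cfg)
	    	if err != nil {
	    		panic(err)
	    	}
	    	ready, err := podReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-450021")
	    	fmt.Println(ready, err)
	    }
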
	
	
	==> CRI-O <==
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.809315490Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914485809279848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f3487287-195a-466e-87ec-f0ec7a02b08a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.810343778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0967773c-ebed-4e95-a62d-bd546214747c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.810415131Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0967773c-ebed-4e95-a62d-bd546214747c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.811110562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0967773c-ebed-4e95-a62d-bd546214747c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.863783717Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e809fa9-61ba-4eb8-a0d4-cf44c2179d86 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.863913317Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e809fa9-61ba-4eb8-a0d4-cf44c2179d86 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.865712597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44f61d41-324d-4c50-bea3-7d5027818950 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.866321810Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914485866284192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44f61d41-324d-4c50-bea3-7d5027818950 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.867016534Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8af3038a-fb5c-4d31-9c35-ac5585d13877 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.867067729Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8af3038a-fb5c-4d31-9c35-ac5585d13877 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.867297868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8af3038a-fb5c-4d31-9c35-ac5585d13877 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.906920029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9508df02-e04d-46fa-9ec7-488fe6bb6b06 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.907021157Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9508df02-e04d-46fa-9ec7-488fe6bb6b06 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.908276310Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8bdd004-6313-4318-9238-4ce8fa604d39 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.908965112Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914485908939076,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8bdd004-6313-4318-9238-4ce8fa604d39 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.909509312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=65a910fc-eb7f-42f9-8daf-d3098e848dd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.909617987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=65a910fc-eb7f-42f9-8daf-d3098e848dd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.909845454Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=65a910fc-eb7f-42f9-8daf-d3098e848dd1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.952474419Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f255403e-4bab-4b0e-ad16-0d15850030d5 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.952543752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f255403e-4bab-4b0e-ad16-0d15850030d5 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.953851749Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a6116d4-22b8-4d5d-bceb-cee486ad544d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.954337781Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914485954314769,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a6116d4-22b8-4d5d-bceb-cee486ad544d name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.955181185Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11dc2922-6b5a-4df4-b541-5ecd126b776a name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.955239032Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11dc2922-6b5a-4df4-b541-5ecd126b776a name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:25 ha-450021 crio[655]: time="2024-10-14 14:01:25.955468418Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11dc2922-6b5a-4df4-b541-5ecd126b776a name=/runtime.v1.RuntimeService/ListContainers
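
The crio journal entries above show one polling cycle repeated against the CRI socket: a RuntimeService/Version call, an ImageService/ImageFsInfo call, and an unfiltered RuntimeService/ListContainers call that returns every container on ha-450021. A minimal sketch of issuing the same queries by hand, assuming the cri-o socket path advertised in the node annotations (unix:///var/run/crio/crio.sock) and shell access to the node (for example via minikube -p ha-450021 ssh):

  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo
  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a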
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a41053c31fcb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c3201918bd10d       busybox-7dff88458-fkz82
	1051cfacf1c9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   49d4b2387dd65       storage-provisioner
	138a0b23a0907       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   e862ae5ec13c3       coredns-7c65d6cfc9-h5s6h
	b17b6d38f9359       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b83407d74496b       coredns-7c65d6cfc9-btfml
	b15af89d835ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   10ad22ab64de3       kindnet-c2xkn
	5eec863af38c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   40a3318e89ae5       kube-proxy-dmbpv
	69f6cdf690df6       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   dcc284c053db6       kube-vip-ha-450021
	09fbfff3b334b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ee3335073bb66       kube-controller-manager-ha-450021
	4efae268f9ec3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   ce558cb07ca8f       kube-scheduler-ha-450021
	6ebec97dfd405       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bc7fe679de4dc       etcd-ha-450021
	942c179e591a9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   efaae5865d8af       kube-apiserver-ha-450021
	
	
	==> coredns [138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe] <==
	[INFO] 10.244.1.2:43382 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000121511s
	[INFO] 10.244.1.2:47675 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001762532s
	[INFO] 10.244.0.4:45515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083904s
	[INFO] 10.244.0.4:48451 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149827s
	[INFO] 10.244.0.4:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015272s
	[INFO] 10.244.2.2:40959 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194596s
	[INFO] 10.244.2.2:44151 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212714s
	[INFO] 10.244.2.2:55911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089682s
	[INFO] 10.244.1.2:47272 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299918s
	[INFO] 10.244.1.2:44591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078031s
	[INFO] 10.244.1.2:37471 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072637s
	[INFO] 10.244.0.4:52930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152779s
	[INFO] 10.244.0.4:33266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005592s
	[INFO] 10.244.2.2:36389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000275257s
	[INFO] 10.244.2.2:43232 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010928s
	[INFO] 10.244.2.2:38102 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092762s
	[INFO] 10.244.1.2:55403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222145s
	[INFO] 10.244.1.2:52540 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102916s
	[INFO] 10.244.0.4:54154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135993s
	[INFO] 10.244.0.4:36974 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196993s
	[INFO] 10.244.0.4:54725 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084888s
	[INFO] 10.244.2.2:57068 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174437s
	[INFO] 10.244.1.2:46234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191287s
	[INFO] 10.244.1.2:39695 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080939s
	[INFO] 10.244.1.2:36634 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064427s
	
	
	==> coredns [b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927] <==
	[INFO] 10.244.0.4:50854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009051191s
	[INFO] 10.244.0.4:34637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156712s
	[INFO] 10.244.0.4:33648 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081153s
	[INFO] 10.244.0.4:57465 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003251096s
	[INFO] 10.244.0.4:51433 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118067s
	[INFO] 10.244.2.2:37621 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200056s
	[INFO] 10.244.2.2:41751 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001978554s
	[INFO] 10.244.2.2:33044 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001486731s
	[INFO] 10.244.2.2:43102 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010457s
	[INFO] 10.244.2.2:36141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183057s
	[INFO] 10.244.1.2:35260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.1.2:40737 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00207375s
	[INFO] 10.244.1.2:34377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109225s
	[INFO] 10.244.1.2:48194 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096468s
	[INFO] 10.244.1.2:53649 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092891s
	[INFO] 10.244.0.4:39691 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126403s
	[INFO] 10.244.0.4:59011 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094158s
	[INFO] 10.244.2.2:46754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133215s
	[INFO] 10.244.1.2:44424 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161779s
	[INFO] 10.244.1.2:36322 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:56787 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305054s
	[INFO] 10.244.2.2:56511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168323s
	[INFO] 10.244.2.2:35510 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000291052s
	[INFO] 10.244.2.2:56208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174753s
	[INFO] 10.244.1.2:41964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119677s
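
Each coredns line above is one logged query: client address and port, query ID, the quoted query type, class, name, transport protocol and message size, then the response code, response flags, response size, and latency. The NXDOMAIN answers for names such as kubernetes.default and kubernetes.default.default.svc.cluster.local are the normal search-path expansions tried before kubernetes.default.svc.cluster.local resolves with NOERROR. The same output can be pulled per pod (a sketch, assuming the kubeconfig context for this cluster is selected):

  kubectl -n kube-system logs coredns-7c65d6cfc9-btfml
  kubectl -n kube-system logs coredns-7c65d6cfc9-h5s6h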
	
	
	==> describe nodes <==
	Name:               ha-450021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:54:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-450021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0546a3427732401daacd4235ad46d465
	  System UUID:                0546a342-7732-401d-aacd-4235ad46d465
	  Boot ID:                    19dd080e-b9f2-467d-b5f2-41dbb07e1880
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fkz82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 coredns-7c65d6cfc9-btfml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 coredns-7c65d6cfc9-h5s6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m22s
	  kube-system                 etcd-ha-450021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m27s
	  kube-system                 kindnet-c2xkn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m22s
	  kube-system                 kube-apiserver-ha-450021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-controller-manager-ha-450021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-proxy-dmbpv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-scheduler-ha-450021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 kube-vip-ha-450021                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m27s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m21s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m34s (x7 over 6m34s)  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m34s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m34s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m34s (x8 over 6m34s)  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m34s (x8 over 6m34s)  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m27s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m27s                  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m27s                  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m27s                  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m23s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  NodeReady                6m8s                   kubelet          Node ha-450021 status is now: NodeReady
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
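
The percentage figures in the Allocated resources block are the summed requests and limits divided by the node's allocatable capacity (2 CPUs and 2164184Ki of memory), rounded down to a whole percent. As a quick check for ha-450021:

  cpu requests:    950m  / 2000m                    = 47.5% -> shown as 47%
  memory requests: 290Mi / 2164184Ki (about 2113Mi) = 13.7% -> shown as 13%
  memory limits:   390Mi / 2164184Ki                = 18.4% -> shown as 18%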
	
	
	Name:               ha-450021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:55:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:58:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-450021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a42e43dc14cb4b949c605bff9ac6e0d6
	  System UUID:                a42e43dc-14cb-4b94-9c60-5bff9ac6e0d6
	  Boot ID:                    479e9a18-0fa8-4366-8acf-af40a06156d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nt6q5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-450021-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kindnet-2ghzc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m30s
	  kube-system                 kube-apiserver-ha-450021-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-controller-manager-ha-450021-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-v24tf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-scheduler-ha-450021-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-vip-ha-450021-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m25s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m30s (x8 over 5m31s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s (x8 over 5m31s)  kubelet          Node ha-450021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s (x7 over 5m31s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m28s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           5m21s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  NodeNotReady             116s                   node-controller  Node ha-450021-m02 status is now: NodeNotReady
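
ha-450021-m02 is the only node in this dump that is not Ready: its kubelet last renewed its lease at 13:58:49, all four node conditions moved to Unknown at 13:59:30 with "Kubelet stopped posting node status", and the node controller marked it NodeNotReady and applied the node.kubernetes.io/unreachable NoSchedule/NoExecute taints, consistent with this control-plane node having been stopped during the run. A quick way to confirm the same state from outside the node (a sketch, assuming the cluster's kubeconfig context):

  kubectl get node ha-450021-m02 -o wide
  kubectl get node ha-450021-m02 -o jsonpath='{.spec.taints}'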
	
	
	Name:               ha-450021-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:57:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-450021-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50171e2610d047279285af0bf8eead91
	  System UUID:                50171e26-10d0-4727-9285-af0bf8eead91
	  Boot ID:                    7b6afcf4-f39b-41c1-92d6-cc1e18f2f3ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lrvnn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 etcd-ha-450021-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m14s
	  kube-system                 kindnet-7jwgx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m16s
	  kube-system                 kube-apiserver-ha-450021-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-controller-manager-ha-450021-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-proxy-9tbfp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-scheduler-ha-450021-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 kube-vip-ha-450021-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  4m16s (x8 over 4m16s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m16s (x8 over 4m16s)  kubelet          Node ha-450021-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m16s (x7 over 4m16s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	
	
	Name:               ha-450021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_58_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-450021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8da54fea409461c84c103e8552a3553
	  System UUID:                c8da54fe-a409-461c-84c1-03e8552a3553
	  Boot ID:                    ed9b9ad9-a71a-4814-ae07-6cc1c2775deb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-478bj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m12s
	  kube-system                 kube-proxy-2mfnd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m7s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m12s (x2 over 3m13s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m12s (x2 over 3m13s)  kubelet          Node ha-450021-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m12s (x2 over 3m13s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           3m8s                   node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-450021-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct14 13:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050735] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040529] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.861908] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.617931] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.603277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.339591] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056090] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067047] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182956] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.129853] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268814] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.909642] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.099441] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.067805] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.555395] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.098328] kauditd_printk_skb: 79 callbacks suppressed
	[Oct14 13:55] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.850947] kauditd_printk_skb: 41 callbacks suppressed
	[Oct14 13:56] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1] <==
	{"level":"warn","ts":"2024-10-14T14:01:26.159875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.221213Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.227461Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.228199Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.235251Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.247361Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.254793Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.260272Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.263880Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.267966Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.272092Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.279176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.285218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.291081Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.294659Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.303298Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.309233Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.315441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.321274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.324443Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.327453Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.330622Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.336308Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.344407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:26.359959Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:01:26 up 7 min,  0 users,  load average: 0.20, 0.20, 0.10
	Linux ha-450021 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996] <==
	I1014 14:00:48.802211       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:00:58.792229       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:00:58.792335       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:00:58.792702       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:00:58.792738       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:00:58.792927       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:00:58.793022       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:00:58.793206       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:00:58.793233       1 main.go:300] handling current node
	I1014 14:01:08.792774       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:08.792894       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:01:08.793209       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:08.793270       1 main.go:300] handling current node
	I1014 14:01:08.793308       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:08.793385       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:08.793725       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:08.793788       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:01:18.792871       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:18.792903       1 main.go:300] handling current node
	I1014 14:01:18.792918       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:18.792922       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:18.793175       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:18.793264       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:01:18.793419       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:18.793492       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e] <==
	I1014 13:54:59.598140       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 13:54:59.663013       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 13:54:59.717856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 13:55:03.816892       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 13:55:04.117644       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 13:55:56.847231       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.847740       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.384µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1014 13:55:56.849144       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.850518       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.851864       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.726003ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1014 13:57:40.356093       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42006: use of closed network connection
	E1014 13:57:40.548948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42022: use of closed network connection
	E1014 13:57:40.734061       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42040: use of closed network connection
	E1014 13:57:40.931904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42056: use of closed network connection
	E1014 13:57:41.132089       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42064: use of closed network connection
	E1014 13:57:41.311104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42080: use of closed network connection
	E1014 13:57:41.483753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42086: use of closed network connection
	E1014 13:57:41.673306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42104: use of closed network connection
	E1014 13:57:41.861924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41084: use of closed network connection
	E1014 13:57:42.155414       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41118: use of closed network connection
	E1014 13:57:42.326032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41138: use of closed network connection
	E1014 13:57:42.498111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41150: use of closed network connection
	E1014 13:57:42.666091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41168: use of closed network connection
	E1014 13:57:42.837965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41180: use of closed network connection
	E1014 13:57:43.032348       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41204: use of closed network connection
	
	
	==> kube-controller-manager [09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a] <==
	I1014 13:58:14.814158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:14.814232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	E1014 13:58:14.983101       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"131c0255-c34c-4638-a6ae-c00d282c1fc8\", ResourceVersion:\"944\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 14, 13, 55, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\"
,\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000d57240), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"
\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b248), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeC
laimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b260), EmptyDir:(*v1.EmptyDirVolumeSource)(
nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxV
olumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b278), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azu
reFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc000d57280)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSo
urce)(0xc000d57300)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false
, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001b502a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralConta
iner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001820428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d51480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ov
erhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e15e60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001820470)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1014 13:58:14.983373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.178688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.243657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.340286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.399942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263850       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-450021-m04"
	I1014 13:58:18.322338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:24.991672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:58:32.779681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:33.281205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:45.471689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:59:30.147306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:59:30.148143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.170693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.349046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.558914ms"
	I1014 13:59:30.349473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="165.118µs"
	I1014 13:59:33.404625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:35.409214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	
	
	==> kube-proxy [5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:55:05.027976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:55:05.042612       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	E1014 13:55:05.042701       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:55:05.077520       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:55:05.077626       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:55:05.077653       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:55:05.080947       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:55:05.081416       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:55:05.081449       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:55:05.084048       1 config.go:199] "Starting service config controller"
	I1014 13:55:05.084244       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:55:05.084407       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:55:05.084429       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:55:05.085497       1 config.go:328] "Starting node config controller"
	I1014 13:55:05.085525       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:55:05.185149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 13:55:05.185195       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:55:05.185638       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221] <==
	W1014 13:54:57.431755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.431801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.619315       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.619367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.631913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 13:54:57.632033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.666200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.666268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.675854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.675918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.682854       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:54:57.683283       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:54:57.820025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:54:57.820087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:55:00.246826       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1014 13:57:36.278433       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.278688       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c(default/busybox-7dff88458-fkz82) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fkz82"
	E1014 13:57:36.278737       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" pod="default/busybox-7dff88458-fkz82"
	I1014 13:57:36.278788       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.279144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:57:36.279201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c0e6c9da-2bbd-4814-9310-ab74d5a3e09d(default/busybox-7dff88458-lrvnn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lrvnn"
	E1014 13:57:36.279240       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" pod="default/busybox-7dff88458-lrvnn"
	I1014 13:57:36.279273       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:58:14.867309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2mfnd" node="ha-450021-m04"
	E1014 13:58:14.867404       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" pod="kube-system/kube-proxy-2mfnd"
	
	
	==> kubelet <==
	Oct 14 13:59:59 ha-450021 kubelet[1299]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 13:59:59 ha-450021 kubelet[1299]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850190    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850218    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852474    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852527    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856761    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856806    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858206    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858470    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861764    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861870    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864513    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864550    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.724357    1299 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866616    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866661    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869535    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869642    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:19 ha-450021 kubelet[1299]: E1014 14:01:19.870997    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479870763162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:19 ha-450021 kubelet[1299]: E1014 14:01:19.871040    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479870763162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-450021 -n ha-450021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.30s)
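The recurring kubelet messages in the post-mortem above have two distinct causes: the kubelet's periodic canary cannot create the KUBE-KUBELET-CANARY chain because the guest kernel exposes no ip6tables "nat" table (the "do you need to insmod?" hint suggests ip6table_nat is not loaded), and the eviction manager rejects CRI-O's ImageFsInfoResponse as missing image stats. A hypothetical diagnostic for the first symptom, run from the host against the node; this Go sketch assumes a "minikube" binary on PATH and the profile name ha-450021, neither of which is taken from the test suite:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical check: does the guest kernel expose the ip6tables "nat" table?
	out, err := exec.Command("minikube", "-p", "ha-450021", "ssh", "--",
		"sudo", "ip6tables", "-t", "nat", "-L", "-n").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// A failure here matches the kubelet warning: the nat table is missing,
		// typically because the ip6table_nat module is not loaded in the guest.
		fmt.Println("ip6tables nat table unavailable:", err)
	}
}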

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (4.139210105s)
ha_test.go:309: expected profile "ha-450021" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-450021\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-450021\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-450021\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.176\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.89\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.55\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.127\",\"Port\":0,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\"
:false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"
MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-450021 -n ha-450021
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 logs -n 25: (1.382929518s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m03_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m04 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp testdata/cp-test.txt                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m04_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03:/home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m03 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-450021 node stop m02 -v=7                                                     | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-450021 node start m02 -v=7                                                    | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:54:19
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:54:19.812271   25306 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:54:19.812610   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.812625   25306 out.go:358] Setting ErrFile to fd 2...
	I1014 13:54:19.812632   25306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:19.813049   25306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:54:19.813610   25306 out.go:352] Setting JSON to false
	I1014 13:54:19.814483   25306 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2210,"bootTime":1728911850,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:54:19.814571   25306 start.go:139] virtualization: kvm guest
	I1014 13:54:19.816884   25306 out.go:177] * [ha-450021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:54:19.818710   25306 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:54:19.818708   25306 notify.go:220] Checking for updates...
	I1014 13:54:19.821425   25306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:54:19.822777   25306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:54:19.824007   25306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.825232   25306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:54:19.826443   25306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:54:19.827738   25306 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:54:19.861394   25306 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 13:54:19.862707   25306 start.go:297] selected driver: kvm2
	I1014 13:54:19.862720   25306 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:54:19.862734   25306 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:54:19.863393   25306 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.863486   25306 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:54:19.878143   25306 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:54:19.878185   25306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:54:19.878407   25306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:54:19.878437   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:19.878478   25306 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1014 13:54:19.878486   25306 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:54:19.878530   25306 start.go:340] cluster config:
	{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:19.878657   25306 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:54:19.881226   25306 out.go:177] * Starting "ha-450021" primary control-plane node in "ha-450021" cluster
	I1014 13:54:19.882326   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:19.882357   25306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:54:19.882366   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:54:19.882441   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:54:19.882451   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:54:19.882789   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:19.882811   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json: {Name:mk7e7a81dd8e8c0d913c7421cc0d458f1e8a36b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:19.882936   25306 start.go:360] acquireMachinesLock for ha-450021: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:54:19.882963   25306 start.go:364] duration metric: took 16.489µs to acquireMachinesLock for "ha-450021"
	I1014 13:54:19.882982   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:54:19.883029   25306 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 13:54:19.884643   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:54:19.884761   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:19.884802   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:19.899595   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35743
	I1014 13:54:19.900085   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:19.900603   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:54:19.900622   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:19.900928   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:19.901089   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:19.901224   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:19.901350   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:54:19.901382   25306 client.go:168] LocalClient.Create starting
	I1014 13:54:19.901414   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:54:19.901441   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901454   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901498   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:54:19.901515   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:54:19.901544   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:54:19.901570   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:54:19.901582   25306 main.go:141] libmachine: (ha-450021) Calling .PreCreateCheck
	I1014 13:54:19.901916   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:19.902252   25306 main.go:141] libmachine: Creating machine...
	I1014 13:54:19.902264   25306 main.go:141] libmachine: (ha-450021) Calling .Create
	I1014 13:54:19.902384   25306 main.go:141] libmachine: (ha-450021) Creating KVM machine...
	I1014 13:54:19.903685   25306 main.go:141] libmachine: (ha-450021) DBG | found existing default KVM network
	I1014 13:54:19.904369   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.904236   25330 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002211e0}
	I1014 13:54:19.904404   25306 main.go:141] libmachine: (ha-450021) DBG | created network xml: 
	I1014 13:54:19.904424   25306 main.go:141] libmachine: (ha-450021) DBG | <network>
	I1014 13:54:19.904433   25306 main.go:141] libmachine: (ha-450021) DBG |   <name>mk-ha-450021</name>
	I1014 13:54:19.904439   25306 main.go:141] libmachine: (ha-450021) DBG |   <dns enable='no'/>
	I1014 13:54:19.904447   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904459   25306 main.go:141] libmachine: (ha-450021) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1014 13:54:19.904466   25306 main.go:141] libmachine: (ha-450021) DBG |     <dhcp>
	I1014 13:54:19.904474   25306 main.go:141] libmachine: (ha-450021) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1014 13:54:19.904486   25306 main.go:141] libmachine: (ha-450021) DBG |     </dhcp>
	I1014 13:54:19.904496   25306 main.go:141] libmachine: (ha-450021) DBG |   </ip>
	I1014 13:54:19.904507   25306 main.go:141] libmachine: (ha-450021) DBG |   
	I1014 13:54:19.904513   25306 main.go:141] libmachine: (ha-450021) DBG | </network>
	I1014 13:54:19.904522   25306 main.go:141] libmachine: (ha-450021) DBG | 
	I1014 13:54:19.910040   25306 main.go:141] libmachine: (ha-450021) DBG | trying to create private KVM network mk-ha-450021 192.168.39.0/24...
	I1014 13:54:19.971833   25306 main.go:141] libmachine: (ha-450021) DBG | private KVM network mk-ha-450021 192.168.39.0/24 created
	I1014 13:54:19.971862   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:19.971805   25330 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:19.971874   25306 main.go:141] libmachine: (ha-450021) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:19.971891   25306 main.go:141] libmachine: (ha-450021) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:54:19.971967   25306 main.go:141] libmachine: (ha-450021) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:54:20.214152   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.214048   25330 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa...
	I1014 13:54:20.270347   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270208   25330 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk...
	I1014 13:54:20.270384   25306 main.go:141] libmachine: (ha-450021) DBG | Writing magic tar header
	I1014 13:54:20.270399   25306 main.go:141] libmachine: (ha-450021) DBG | Writing SSH key tar header
	I1014 13:54:20.270411   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:20.270359   25330 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 ...
	I1014 13:54:20.270469   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021
	I1014 13:54:20.270577   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021 (perms=drwx------)
	I1014 13:54:20.270629   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:54:20.270649   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:54:20.270663   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:20.270676   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:54:20.270690   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:54:20.270697   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:54:20.270707   25306 main.go:141] libmachine: (ha-450021) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:54:20.270716   25306 main.go:141] libmachine: (ha-450021) Creating domain...
	I1014 13:54:20.270725   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:54:20.270732   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:54:20.270758   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:54:20.270778   25306 main.go:141] libmachine: (ha-450021) DBG | Checking permissions on dir: /home
	I1014 13:54:20.270791   25306 main.go:141] libmachine: (ha-450021) DBG | Skipping /home - not owner
	I1014 13:54:20.271873   25306 main.go:141] libmachine: (ha-450021) define libvirt domain using xml: 
	I1014 13:54:20.271895   25306 main.go:141] libmachine: (ha-450021) <domain type='kvm'>
	I1014 13:54:20.271904   25306 main.go:141] libmachine: (ha-450021)   <name>ha-450021</name>
	I1014 13:54:20.271909   25306 main.go:141] libmachine: (ha-450021)   <memory unit='MiB'>2200</memory>
	I1014 13:54:20.271915   25306 main.go:141] libmachine: (ha-450021)   <vcpu>2</vcpu>
	I1014 13:54:20.271922   25306 main.go:141] libmachine: (ha-450021)   <features>
	I1014 13:54:20.271942   25306 main.go:141] libmachine: (ha-450021)     <acpi/>
	I1014 13:54:20.271950   25306 main.go:141] libmachine: (ha-450021)     <apic/>
	I1014 13:54:20.271956   25306 main.go:141] libmachine: (ha-450021)     <pae/>
	I1014 13:54:20.271997   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272026   25306 main.go:141] libmachine: (ha-450021)   </features>
	I1014 13:54:20.272048   25306 main.go:141] libmachine: (ha-450021)   <cpu mode='host-passthrough'>
	I1014 13:54:20.272058   25306 main.go:141] libmachine: (ha-450021)   
	I1014 13:54:20.272070   25306 main.go:141] libmachine: (ha-450021)   </cpu>
	I1014 13:54:20.272081   25306 main.go:141] libmachine: (ha-450021)   <os>
	I1014 13:54:20.272089   25306 main.go:141] libmachine: (ha-450021)     <type>hvm</type>
	I1014 13:54:20.272100   25306 main.go:141] libmachine: (ha-450021)     <boot dev='cdrom'/>
	I1014 13:54:20.272132   25306 main.go:141] libmachine: (ha-450021)     <boot dev='hd'/>
	I1014 13:54:20.272144   25306 main.go:141] libmachine: (ha-450021)     <bootmenu enable='no'/>
	I1014 13:54:20.272150   25306 main.go:141] libmachine: (ha-450021)   </os>
	I1014 13:54:20.272158   25306 main.go:141] libmachine: (ha-450021)   <devices>
	I1014 13:54:20.272173   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='cdrom'>
	I1014 13:54:20.272188   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/boot2docker.iso'/>
	I1014 13:54:20.272198   25306 main.go:141] libmachine: (ha-450021)       <target dev='hdc' bus='scsi'/>
	I1014 13:54:20.272208   25306 main.go:141] libmachine: (ha-450021)       <readonly/>
	I1014 13:54:20.272217   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272224   25306 main.go:141] libmachine: (ha-450021)     <disk type='file' device='disk'>
	I1014 13:54:20.272233   25306 main.go:141] libmachine: (ha-450021)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:54:20.272252   25306 main.go:141] libmachine: (ha-450021)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/ha-450021.rawdisk'/>
	I1014 13:54:20.272267   25306 main.go:141] libmachine: (ha-450021)       <target dev='hda' bus='virtio'/>
	I1014 13:54:20.272277   25306 main.go:141] libmachine: (ha-450021)     </disk>
	I1014 13:54:20.272287   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272303   25306 main.go:141] libmachine: (ha-450021)       <source network='mk-ha-450021'/>
	I1014 13:54:20.272315   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272323   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272332   25306 main.go:141] libmachine: (ha-450021)     <interface type='network'>
	I1014 13:54:20.272356   25306 main.go:141] libmachine: (ha-450021)       <source network='default'/>
	I1014 13:54:20.272378   25306 main.go:141] libmachine: (ha-450021)       <model type='virtio'/>
	I1014 13:54:20.272390   25306 main.go:141] libmachine: (ha-450021)     </interface>
	I1014 13:54:20.272397   25306 main.go:141] libmachine: (ha-450021)     <serial type='pty'>
	I1014 13:54:20.272402   25306 main.go:141] libmachine: (ha-450021)       <target port='0'/>
	I1014 13:54:20.272409   25306 main.go:141] libmachine: (ha-450021)     </serial>
	I1014 13:54:20.272414   25306 main.go:141] libmachine: (ha-450021)     <console type='pty'>
	I1014 13:54:20.272421   25306 main.go:141] libmachine: (ha-450021)       <target type='serial' port='0'/>
	I1014 13:54:20.272426   25306 main.go:141] libmachine: (ha-450021)     </console>
	I1014 13:54:20.272433   25306 main.go:141] libmachine: (ha-450021)     <rng model='virtio'>
	I1014 13:54:20.272442   25306 main.go:141] libmachine: (ha-450021)       <backend model='random'>/dev/random</backend>
	I1014 13:54:20.272449   25306 main.go:141] libmachine: (ha-450021)     </rng>
	I1014 13:54:20.272464   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272479   25306 main.go:141] libmachine: (ha-450021)     
	I1014 13:54:20.272490   25306 main.go:141] libmachine: (ha-450021)   </devices>
	I1014 13:54:20.272499   25306 main.go:141] libmachine: (ha-450021) </domain>
	I1014 13:54:20.272508   25306 main.go:141] libmachine: (ha-450021) 
	I1014 13:54:20.276743   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:57:d6:54 in network default
	I1014 13:54:20.277233   25306 main.go:141] libmachine: (ha-450021) Ensuring networks are active...
	I1014 13:54:20.277256   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:20.277849   25306 main.go:141] libmachine: (ha-450021) Ensuring network default is active
	I1014 13:54:20.278100   25306 main.go:141] libmachine: (ha-450021) Ensuring network mk-ha-450021 is active
	I1014 13:54:20.278557   25306 main.go:141] libmachine: (ha-450021) Getting domain xml...
	I1014 13:54:20.279179   25306 main.go:141] libmachine: (ha-450021) Creating domain...
	I1014 13:54:21.462335   25306 main.go:141] libmachine: (ha-450021) Waiting to get IP...
	I1014 13:54:21.463069   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.463429   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.463469   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.463416   25330 retry.go:31] will retry after 252.896893ms: waiting for machine to come up
	I1014 13:54:21.717838   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:21.718276   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:21.718307   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:21.718253   25330 retry.go:31] will retry after 323.417298ms: waiting for machine to come up
	I1014 13:54:22.043653   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.044089   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.044113   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.044049   25330 retry.go:31] will retry after 429.247039ms: waiting for machine to come up
	I1014 13:54:22.474550   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:22.475007   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:22.475032   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:22.474972   25330 retry.go:31] will retry after 584.602082ms: waiting for machine to come up
	I1014 13:54:23.060636   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.061070   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.061096   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.061025   25330 retry.go:31] will retry after 757.618183ms: waiting for machine to come up
	I1014 13:54:23.819839   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:23.820349   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:23.820388   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:23.820305   25330 retry.go:31] will retry after 770.363721ms: waiting for machine to come up
	I1014 13:54:24.592151   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:24.592528   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:24.592563   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:24.592475   25330 retry.go:31] will retry after 746.543201ms: waiting for machine to come up
	I1014 13:54:25.340318   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:25.340826   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:25.340855   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:25.340782   25330 retry.go:31] will retry after 1.064448623s: waiting for machine to come up
	I1014 13:54:26.407039   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:26.407396   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:26.407443   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:26.407341   25330 retry.go:31] will retry after 1.702825811s: waiting for machine to come up
	I1014 13:54:28.112412   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:28.112812   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:28.112833   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:28.112771   25330 retry.go:31] will retry after 2.323768802s: waiting for machine to come up
	I1014 13:54:30.438077   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:30.438423   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:30.438463   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:30.438389   25330 retry.go:31] will retry after 2.882558658s: waiting for machine to come up
	I1014 13:54:33.324506   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:33.324987   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:33.325011   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:33.324949   25330 retry.go:31] will retry after 3.489582892s: waiting for machine to come up
	I1014 13:54:36.817112   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:36.817504   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find current IP address of domain ha-450021 in network mk-ha-450021
	I1014 13:54:36.817523   25306 main.go:141] libmachine: (ha-450021) DBG | I1014 13:54:36.817476   25330 retry.go:31] will retry after 4.118141928s: waiting for machine to come up
	I1014 13:54:40.937526   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938020   25306 main.go:141] libmachine: (ha-450021) Found IP for machine: 192.168.39.176
	I1014 13:54:40.938039   25306 main.go:141] libmachine: (ha-450021) Reserving static IP address...
	I1014 13:54:40.938070   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has current primary IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:40.938454   25306 main.go:141] libmachine: (ha-450021) DBG | unable to find host DHCP lease matching {name: "ha-450021", mac: "52:54:00:a1:20:5f", ip: "192.168.39.176"} in network mk-ha-450021
	I1014 13:54:41.006419   25306 main.go:141] libmachine: (ha-450021) DBG | Getting to WaitForSSH function...
	I1014 13:54:41.006450   25306 main.go:141] libmachine: (ha-450021) Reserved static IP address: 192.168.39.176
	I1014 13:54:41.006463   25306 main.go:141] libmachine: (ha-450021) Waiting for SSH to be available...
	I1014 13:54:41.008964   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009322   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.009350   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.009443   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH client type: external
	I1014 13:54:41.009470   25306 main.go:141] libmachine: (ha-450021) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa (-rw-------)
	I1014 13:54:41.009582   25306 main.go:141] libmachine: (ha-450021) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.176 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:54:41.009598   25306 main.go:141] libmachine: (ha-450021) DBG | About to run SSH command:
	I1014 13:54:41.009610   25306 main.go:141] libmachine: (ha-450021) DBG | exit 0
	I1014 13:54:41.138539   25306 main.go:141] libmachine: (ha-450021) DBG | SSH cmd err, output: <nil>: 
	I1014 13:54:41.138806   25306 main.go:141] libmachine: (ha-450021) KVM machine creation complete!
	I1014 13:54:41.139099   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:41.139669   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139826   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:41.139970   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:54:41.139983   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:54:41.141211   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:54:41.141221   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:54:41.141226   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:54:41.141232   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.143400   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143673   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.143693   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.143898   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.144069   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144217   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.144390   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.144570   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.144741   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.144750   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:54:41.257764   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:54:41.257787   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:54:41.257794   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.260355   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260721   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.260755   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.260886   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.261058   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261185   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.261349   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.261568   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.261770   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.261781   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:54:41.387334   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:54:41.387407   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:54:41.387415   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:54:41.387428   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387694   25306 buildroot.go:166] provisioning hostname "ha-450021"
	I1014 13:54:41.387742   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.387887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.390287   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390677   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.390702   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.390836   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.391004   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391122   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.391234   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.391358   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.391508   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.391518   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname
	I1014 13:54:41.517186   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 13:54:41.517216   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.520093   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520451   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.520480   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.520651   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:41.520827   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.520970   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:41.521077   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:41.521209   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:41.521391   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:41.521405   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:54:41.643685   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
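The hostname step above is representative of how this provisioning phase works: every command is run on the guest over SSH as the docker user with the profile's id_rsa key (see the sshutil lines later in this log). The following is a minimal Go sketch of issuing one such remote command with golang.org/x/crypto/ssh, under those assumptions; it is an illustration only, not minikube's actual libmachine implementation.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path, user and address mirror the log above; they are assumptions of
        // this sketch, not requirements.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", "192.168.39.176:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // Same command the provisioner ran above.
        out, err := sess.CombinedOutput(`sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname`)
        fmt.Printf("output: %s err: %v\n", out, err)
    }
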
	I1014 13:54:41.643709   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:54:41.643742   25306 buildroot.go:174] setting up certificates
	I1014 13:54:41.643754   25306 provision.go:84] configureAuth start
	I1014 13:54:41.643778   25306 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 13:54:41.644050   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:41.646478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.646878   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.646897   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.647059   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:41.648912   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649213   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:41.649236   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:41.649373   25306 provision.go:143] copyHostCerts
	I1014 13:54:41.649402   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649434   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:54:41.649453   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:54:41.649515   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:54:41.649594   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649617   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:54:41.649623   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:54:41.649649   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:54:41.649688   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649704   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:54:41.649710   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:54:41.649730   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:54:41.649772   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021 san=[127.0.0.1 192.168.39.176 ha-450021 localhost minikube]
	I1014 13:54:41.997744   25306 provision.go:177] copyRemoteCerts
	I1014 13:54:41.997799   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:54:41.997817   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.000612   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.000903   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.000935   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.001075   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.001266   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.001429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.001565   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.088827   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:54:42.088897   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:54:42.116095   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:54:42.116160   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:54:42.142757   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:54:42.142813   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 13:54:42.169537   25306 provision.go:87] duration metric: took 525.766906ms to configureAuth
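configureAuth above generates a server certificate with the SANs listed in the provision.go line (127.0.0.1, 192.168.39.176, ha-450021, localhost, minikube) and then copies it to /etc/docker on the guest. The following is a rough, self-contained sketch of producing a certificate with those SANs using Go's standard crypto/x509; it is self-signed for brevity (the real one is signed by the profile's CA) and the lifetime simply reuses the CertExpiration value shown in the cluster config above.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-450021"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go line above.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.176")},
            DNSNames:    []string{"ha-450021", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &priv.PublicKey, priv)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
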
	I1014 13:54:42.169566   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:54:42.169754   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:54:42.169842   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.173229   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174055   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.174080   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.174242   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.174429   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174574   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.174715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.174880   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.175029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.175043   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:54:42.406341   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:54:42.406376   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:54:42.406388   25306 main.go:141] libmachine: (ha-450021) Calling .GetURL
	I1014 13:54:42.407812   25306 main.go:141] libmachine: (ha-450021) DBG | Using libvirt version 6000000
	I1014 13:54:42.409824   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410126   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.410157   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.410300   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:54:42.410319   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:54:42.410327   25306 client.go:171] duration metric: took 22.508934376s to LocalClient.Create
	I1014 13:54:42.410349   25306 start.go:167] duration metric: took 22.50900119s to libmachine.API.Create "ha-450021"
	I1014 13:54:42.410361   25306 start.go:293] postStartSetup for "ha-450021" (driver="kvm2")
	I1014 13:54:42.410370   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:54:42.410386   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.410579   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:54:42.410619   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.412494   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412776   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.412801   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.412917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.413098   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.413204   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.413344   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.501187   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:54:42.505548   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:54:42.505573   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:54:42.505640   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:54:42.505739   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:54:42.505751   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:54:42.505871   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:54:42.515100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:42.540037   25306 start.go:296] duration metric: took 129.664961ms for postStartSetup
	I1014 13:54:42.540090   25306 main.go:141] libmachine: (ha-450021) Calling .GetConfigRaw
	I1014 13:54:42.540652   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.543542   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.543870   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.543893   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.544115   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:54:42.544316   25306 start.go:128] duration metric: took 22.661278968s to createHost
	I1014 13:54:42.544340   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.546241   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546584   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.546619   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.546735   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.546887   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547016   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.547115   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.547241   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:54:42.547400   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 13:54:42.547410   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:54:42.659258   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914082.633821014
	
	I1014 13:54:42.659276   25306 fix.go:216] guest clock: 1728914082.633821014
	I1014 13:54:42.659283   25306 fix.go:229] Guest: 2024-10-14 13:54:42.633821014 +0000 UTC Remote: 2024-10-14 13:54:42.544328107 +0000 UTC m=+22.768041164 (delta=89.492907ms)
	I1014 13:54:42.659308   25306 fix.go:200] guest clock delta is within tolerance: 89.492907ms
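The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the freshly created host when the delta stays inside a tolerance. A small sketch of that comparison follows; the 2s tolerance is an assumed value for illustration, not necessarily the one minikube uses.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock parses `date +%s.%N` output, e.g. "1728914082.633821014".
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1728914082.633821014")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 2 * time.Second // assumed threshold, illustration only
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }
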
	I1014 13:54:42.659315   25306 start.go:83] releasing machines lock for "ha-450021", held for 22.776339529s
	I1014 13:54:42.659340   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.659634   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:42.662263   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662566   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.662590   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.662762   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663245   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663382   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:54:42.663435   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:54:42.663485   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.663584   25306 ssh_runner.go:195] Run: cat /version.json
	I1014 13:54:42.663609   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:54:42.665952   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666140   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666285   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666310   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666455   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:42.666478   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:42.666495   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666715   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:54:42.666742   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.666851   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.666858   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:54:42.667031   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:54:42.667026   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.667128   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:54:42.747369   25306 ssh_runner.go:195] Run: systemctl --version
	I1014 13:54:42.781149   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:54:42.939239   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:54:42.945827   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:54:42.945908   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:54:42.961868   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:54:42.961898   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:54:42.961965   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:54:42.979523   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:54:42.994309   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:54:42.994364   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:54:43.009231   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:54:43.023792   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:54:43.139525   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:54:43.303272   25306 docker.go:233] disabling docker service ...
	I1014 13:54:43.303333   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:54:43.318132   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:54:43.331650   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:54:43.447799   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:54:43.574532   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:54:43.588882   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:54:43.606788   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:54:43.606849   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.617065   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:54:43.617138   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.627421   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.637692   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.648944   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:54:43.659223   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.669296   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:54:43.686887   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
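The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: the pause image becomes registry.k8s.io/pause:3.10, cgroup_manager becomes "cgroupfs", conmon_cgroup is reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 is inserted into default_sysctls. Below is a hedged Go sketch of the same kind of line-oriented rewrite using regexp instead of sed; the starting file content is invented for illustration, while the replacement values come from the log.

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Illustrative starting content; the real file is CRI-O's drop-in config.
        conf := []byte("pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n")

        // Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))

        // Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(conf, []byte(`cgroup_manager = "cgroupfs"`))

        fmt.Print(string(conf))
    }
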
	I1014 13:54:43.697925   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:54:43.707402   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:54:43.707476   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:54:43.720091   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:54:43.729667   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:43.845781   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:54:43.932782   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:54:43.932868   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:54:43.938172   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:54:43.938228   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:54:43.941774   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:54:43.979317   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
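After restarting CRI-O, the start code above waits up to 60s for the /var/run/crio/crio.sock socket to appear and then up to 60s for `crictl version` to answer. A minimal polling sketch of the socket wait follows; the poll interval is an arbitrary choice for illustration.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout elapses, mirroring the
    // "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("crio socket is up")
    }
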
	I1014 13:54:43.979415   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.006952   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:54:44.038472   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:54:44.039762   25306 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 13:54:44.042304   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042634   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:54:44.042661   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:54:44.042831   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:54:44.046611   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:54:44.059369   25306 kubeadm.go:883] updating cluster {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:54:44.059491   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:54:44.059551   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:44.090998   25306 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 13:54:44.091053   25306 ssh_runner.go:195] Run: which lz4
	I1014 13:54:44.094706   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1014 13:54:44.094776   25306 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 13:54:44.098775   25306 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 13:54:44.098800   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 13:54:45.421436   25306 crio.go:462] duration metric: took 1.326676583s to copy over tarball
	I1014 13:54:45.421513   25306 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 13:54:47.393636   25306 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.97209405s)
	I1014 13:54:47.393677   25306 crio.go:469] duration metric: took 1.97220742s to extract the tarball
	I1014 13:54:47.393687   25306 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 13:54:47.430848   25306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:54:47.475174   25306 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 13:54:47.475197   25306 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:54:47.475204   25306 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.1 crio true true} ...
	I1014 13:54:47.475299   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:54:47.475375   25306 ssh_runner.go:195] Run: crio config
	I1014 13:54:47.520162   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:54:47.520183   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:54:47.520192   25306 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:54:47.520214   25306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450021 NodeName:ha-450021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:54:47.520316   25306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-450021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
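The YAML printed above, ending with the KubeProxyConfiguration document, is the complete kubeadm config minikube generates; it is written to /var/tmp/minikube/kubeadm.yaml.new further down and eventually handed to `kubeadm init --config` at the end of this section. As a sketch only, the multi-document file can be split and inspected with gopkg.in/yaml.v3; minikube itself renders this file from templates rather than parsing it back, and the path below is taken from the log.

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log above
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            err := dec.Decode(&doc)
            if errors.Is(err, io.EOF) {
                break
            }
            if err != nil {
                panic(err)
            }
            // Each document carries apiVersion/kind, e.g. kubeadm.k8s.io/v1beta4 ClusterConfiguration.
            fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
        }
    }
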
	
	I1014 13:54:47.520338   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:54:47.520375   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:54:47.537448   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:54:47.537535   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1014 13:54:47.537577   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:54:47.551104   25306 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:54:47.551176   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 13:54:47.562687   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1014 13:54:47.578926   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:54:47.594827   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1014 13:54:47.610693   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1014 13:54:47.626695   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:54:47.630338   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:54:47.642280   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:54:47.756050   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:54:47.773461   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.176
	I1014 13:54:47.773484   25306 certs.go:194] generating shared ca certs ...
	I1014 13:54:47.773503   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:47.773705   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:54:47.773829   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:54:47.773848   25306 certs.go:256] generating profile certs ...
	I1014 13:54:47.773913   25306 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:54:47.773930   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt with IP's: []
	I1014 13:54:48.113501   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt ...
	I1014 13:54:48.113531   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt: {Name:mkbf9820119866d476b6914d2148d200b676c657 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113715   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key ...
	I1014 13:54:48.113731   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key: {Name:mk7d74bdc4633efc50efa47cc87ab000404cd20c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.113831   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180
	I1014 13:54:48.113850   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.254]
	I1014 13:54:48.267925   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 ...
	I1014 13:54:48.267957   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180: {Name:mkd19ba2c223d25d9a0673db3befa3152f7a2c70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268143   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 ...
	I1014 13:54:48.268160   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180: {Name:mkd725fc60a32f585bc691d5e3dd373c3c488835 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.268262   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:54:48.268370   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.1083e180 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:54:48.268460   25306 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:54:48.268481   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt with IP's: []
	I1014 13:54:48.434515   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt ...
	I1014 13:54:48.434539   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt: {Name:mk37070511c0eff0f5c442e93060bbaddee85673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434689   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key ...
	I1014 13:54:48.434700   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key: {Name:mk4252d17e842b88b135b952004ba8203bf67100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:54:48.434774   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:54:48.434791   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:54:48.434801   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:54:48.434813   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:54:48.434823   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:54:48.434833   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:54:48.434843   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:54:48.434854   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:54:48.434895   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:54:48.434936   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:54:48.434945   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:54:48.434969   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:54:48.434990   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:54:48.435010   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:54:48.435044   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:54:48.435072   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.435084   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.435096   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.436322   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:54:48.461913   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:54:48.484404   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:54:48.506815   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:54:48.532871   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 13:54:48.555023   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:54:48.577102   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:54:48.599841   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:54:48.622100   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:54:48.644244   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:54:48.666067   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:54:48.688272   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:54:48.704452   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:54:48.709950   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:54:48.720462   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724736   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.724786   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:54:48.730515   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:54:48.740926   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:54:48.751163   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755136   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.755173   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:54:48.760601   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:54:48.771042   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:54:48.781517   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785721   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.785757   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:54:48.791039   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
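Each CA certificate installed above also gets a symlink named after its OpenSSL subject hash (`openssl x509 -hash -noout -in <cert>` followed by `ln -fs` into /etc/ssl/certs/<hash>.0), which is how OpenSSL-based clients locate trusted certificates. A rough Go sketch of those two steps follows, shelling out to openssl for the hash; it is an illustration only (not minikube's code) and would need root to write the symlink.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash creates /etc/ssl/certs/<subject-hash>.0 pointing at certPath,
    // mirroring the openssl + ln -fs sequence in the log above.
    func linkBySubjectHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // reproduce the -f behaviour of ln
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("error:", err)
        }
    }
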
	I1014 13:54:48.801295   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:54:48.805300   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:54:48.805353   25306 kubeadm.go:392] StartCluster: {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:48.805425   25306 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 13:54:48.805474   25306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:54:48.846958   25306 cri.go:89] found id: ""
	I1014 13:54:48.847017   25306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:54:48.856997   25306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:54:48.866515   25306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:54:48.876223   25306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:54:48.876241   25306 kubeadm.go:157] found existing configuration files:
	
	I1014 13:54:48.876288   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:54:48.885144   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:54:48.885195   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:54:48.894355   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:54:48.902957   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:54:48.903009   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:54:48.912153   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.921701   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:54:48.921759   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:54:48.931128   25306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:54:48.939839   25306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:54:48.939871   25306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
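
	The grep/rm sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint and removed if it does not reference it (here the files simply do not exist yet, so every grep exits with status 2). A rough, self-contained Go sketch of the same idea, reusing the paths and endpoint seen in the log; it is an illustration, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil {
			continue // missing file: nothing to clean up (the case in the log above)
		}
		if !strings.Contains(string(data), endpoint) {
			// Config points at a different endpoint; remove it so kubeadm regenerates it.
			if err := os.Remove(path); err != nil {
				fmt.Fprintln(os.Stderr, "remove:", err)
			}
		}
	}
}
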
	I1014 13:54:48.948948   25306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 13:54:49.168356   25306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:55:00.103864   25306 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:55:00.103941   25306 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:55:00.104029   25306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:55:00.104143   25306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:55:00.104280   25306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:55:00.104375   25306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:55:00.106272   25306 out.go:235]   - Generating certificates and keys ...
	I1014 13:55:00.106362   25306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:55:00.106429   25306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:55:00.106511   25306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:55:00.106612   25306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:55:00.106709   25306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:55:00.106793   25306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:55:00.106864   25306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:55:00.107022   25306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107089   25306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:55:00.107238   25306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-450021 localhost] and IPs [192.168.39.176 127.0.0.1 ::1]
	I1014 13:55:00.107331   25306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:55:00.107416   25306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:55:00.107496   25306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:55:00.107576   25306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:55:00.107656   25306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:55:00.107736   25306 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:55:00.107811   25306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:55:00.107905   25306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:55:00.107957   25306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:55:00.108061   25306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:55:00.108162   25306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:55:00.109922   25306 out.go:235]   - Booting up control plane ...
	I1014 13:55:00.110034   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:55:00.110132   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:55:00.110214   25306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:55:00.110345   25306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:55:00.110449   25306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:55:00.110494   25306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:55:00.110622   25306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:55:00.110705   25306 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:55:00.110755   25306 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002174478s
	I1014 13:55:00.110843   25306 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:55:00.110911   25306 kubeadm.go:310] [api-check] The API server is healthy after 5.813875513s
	I1014 13:55:00.111034   25306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:55:00.111171   25306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:55:00.111231   25306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:55:00.111391   25306 kubeadm.go:310] [mark-control-plane] Marking the node ha-450021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:55:00.111441   25306 kubeadm.go:310] [bootstrap-token] Using token: e8eaxr.5trfuyfb27hv7e11
	I1014 13:55:00.112896   25306 out.go:235]   - Configuring RBAC rules ...
	I1014 13:55:00.113020   25306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:55:00.113086   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:55:00.113219   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:55:00.113369   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:55:00.113527   25306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:55:00.113646   25306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:55:00.113778   25306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:55:00.113819   25306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:55:00.113862   25306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:55:00.113868   25306 kubeadm.go:310] 
	I1014 13:55:00.113922   25306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:55:00.113928   25306 kubeadm.go:310] 
	I1014 13:55:00.113997   25306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:55:00.114004   25306 kubeadm.go:310] 
	I1014 13:55:00.114048   25306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:55:00.114129   25306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:55:00.114180   25306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:55:00.114188   25306 kubeadm.go:310] 
	I1014 13:55:00.114245   25306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:55:00.114263   25306 kubeadm.go:310] 
	I1014 13:55:00.114330   25306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:55:00.114341   25306 kubeadm.go:310] 
	I1014 13:55:00.114411   25306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:55:00.114513   25306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:55:00.114572   25306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:55:00.114578   25306 kubeadm.go:310] 
	I1014 13:55:00.114693   25306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:55:00.114784   25306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:55:00.114793   25306 kubeadm.go:310] 
	I1014 13:55:00.114891   25306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.114977   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 13:55:00.114998   25306 kubeadm.go:310] 	--control-plane 
	I1014 13:55:00.115002   25306 kubeadm.go:310] 
	I1014 13:55:00.115074   25306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:55:00.115080   25306 kubeadm.go:310] 
	I1014 13:55:00.115154   25306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token e8eaxr.5trfuyfb27hv7e11 \
	I1014 13:55:00.115275   25306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 13:55:00.115292   25306 cni.go:84] Creating CNI manager for ""
	I1014 13:55:00.115302   25306 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1014 13:55:00.117091   25306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 13:55:00.118483   25306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 13:55:00.124368   25306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 13:55:00.124388   25306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 13:55:00.145958   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
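
	The kindnet CNI manifest is applied with a plain "kubectl apply -f" run on the guest, using the kubectl binary and kubeconfig staged under /var/lib/minikube. A minimal Go sketch of that invocation via os/exec, with paths copied from the log and the surrounding SSH transport omitted:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"
	manifest := "/var/tmp/minikube/cni.yaml"

	// Equivalent of: sudo kubectl apply --kubeconfig=... -f /var/tmp/minikube/cni.yaml
	cmd := exec.Command("sudo", kubectl, "apply", "--kubeconfig="+kubeconfig, "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
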
	I1014 13:55:00.528887   25306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:55:00.528967   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:00.528987   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021 minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=true
	I1014 13:55:00.543744   25306 ops.go:34] apiserver oom_adj: -16
	I1014 13:55:00.662237   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.162275   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:01.662698   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.163027   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:02.662525   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.162972   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.662524   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:55:03.751160   25306 kubeadm.go:1113] duration metric: took 3.222260966s to wait for elevateKubeSystemPrivileges
	I1014 13:55:03.751200   25306 kubeadm.go:394] duration metric: took 14.945849765s to StartCluster
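
	The repeated "kubectl get sa default" runs above are a simple poll: minikube retries roughly every 500 ms until the "default" ServiceAccount exists, which is how it detects that kube-system privileges have been elevated (the 3.2 s duration metric is this wait). A bare-bones Go sketch of that kind of poll loop, with the command, interval, and timeout treated as assumptions:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount has been created.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for the default service account")
	os.Exit(1)
}
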
	I1014 13:55:03.751222   25306 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.751304   25306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.752000   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:03.752256   25306 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:03.752277   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:55:03.752262   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:55:03.752277   25306 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 13:55:03.752370   25306 addons.go:69] Setting storage-provisioner=true in profile "ha-450021"
	I1014 13:55:03.752388   25306 addons.go:234] Setting addon storage-provisioner=true in "ha-450021"
	I1014 13:55:03.752407   25306 addons.go:69] Setting default-storageclass=true in profile "ha-450021"
	I1014 13:55:03.752422   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.752435   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:03.752440   25306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-450021"
	I1014 13:55:03.752851   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752853   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.752892   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.752907   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.768120   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40745
	I1014 13:55:03.768294   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36817
	I1014 13:55:03.768559   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.768773   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.769132   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769156   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769285   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.769308   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.769488   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769594   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.769745   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.770040   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.770082   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.771657   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:03.771868   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 13:55:03.772274   25306 cert_rotation.go:140] Starting client certificate rotation controller
	I1014 13:55:03.772426   25306 addons.go:234] Setting addon default-storageclass=true in "ha-450021"
	I1014 13:55:03.772458   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:03.772689   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.772720   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.785301   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39049
	I1014 13:55:03.785754   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.786274   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.786301   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.786653   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.786685   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I1014 13:55:03.786852   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.787134   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.787596   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.787621   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.787924   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.788463   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:03.788507   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:03.788527   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.790666   25306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:55:03.791877   25306 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:03.791892   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:55:03.791905   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.794484   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794853   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.794881   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.794998   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.795150   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.795298   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.795425   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.804082   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36387
	I1014 13:55:03.804475   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:03.804871   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:03.804893   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:03.805154   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:03.805296   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:03.806617   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:03.806811   25306 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:03.806824   25306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:55:03.806838   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:03.809334   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809735   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:03.809764   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:03.809917   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:03.810083   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:03.810214   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:03.810346   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:03.916382   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:55:03.970762   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:55:04.045876   25306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:55:04.562851   25306 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
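
	The long shell pipeline above rewrites the CoreDNS Corefile: it inserts a hosts block that resolves host.minikube.internal to the host-side gateway (192.168.39.1) just before the forward plugin, then replaces the ConfigMap. A simplified Go sketch of the same text edit on an in-memory Corefile; the sample Corefile is abbreviated, and the log-plugin insertion done by the real pipeline is left out:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated sample Corefile for illustration only.
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n    cache 30\n}"
	hosts := "    hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }\n"

	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		// Insert the hosts block immediately before the forward plugin,
		// mirroring the sed expression in the log above.
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			out.WriteString(hosts)
		}
		out.WriteString(line + "\n")
	}
	fmt.Print(out.String())
}
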
	I1014 13:55:04.828250   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828267   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828285   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828272   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828566   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828578   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828586   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828592   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828628   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828642   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.828650   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.828657   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.828760   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.828781   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.828790   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830286   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.830303   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.830318   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.830357   25306 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 13:55:04.830377   25306 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 13:55:04.830467   25306 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1014 13:55:04.830477   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.830487   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.830500   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.851944   25306 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I1014 13:55:04.852525   25306 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1014 13:55:04.852541   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:04.852549   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:04.852558   25306 round_trippers.go:473]     Content-Type: application/json
	I1014 13:55:04.852569   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:04.860873   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:55:04.863865   25306 main.go:141] libmachine: Making call to close driver server
	I1014 13:55:04.863890   25306 main.go:141] libmachine: (ha-450021) Calling .Close
	I1014 13:55:04.864194   25306 main.go:141] libmachine: (ha-450021) DBG | Closing plugin on server side
	I1014 13:55:04.864235   25306 main.go:141] libmachine: Successfully made call to close driver server
	I1014 13:55:04.864246   25306 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 13:55:04.865910   25306 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 13:55:04.867207   25306 addons.go:510] duration metric: took 1.114927542s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 13:55:04.867245   25306 start.go:246] waiting for cluster config update ...
	I1014 13:55:04.867260   25306 start.go:255] writing updated cluster config ...
	I1014 13:55:04.868981   25306 out.go:201] 
	I1014 13:55:04.870358   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:04.870432   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.871998   25306 out.go:177] * Starting "ha-450021-m02" control-plane node in "ha-450021" cluster
	I1014 13:55:04.873148   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:55:04.873168   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:55:04.873259   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:55:04.873270   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:55:04.873348   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:04.873725   25306 start.go:360] acquireMachinesLock for ha-450021-m02: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:55:04.873773   25306 start.go:364] duration metric: took 27.606µs to acquireMachinesLock for "ha-450021-m02"
	I1014 13:55:04.873797   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:04.873856   25306 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1014 13:55:04.875450   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:55:04.875534   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:04.875571   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:04.891858   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
	I1014 13:55:04.892468   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:04.893080   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:04.893101   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:04.893416   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:04.893639   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:04.893812   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:04.894009   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:55:04.894037   25306 client.go:168] LocalClient.Create starting
	I1014 13:55:04.894069   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:55:04.894114   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894134   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894211   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:55:04.894240   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:55:04.894258   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:55:04.894285   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:55:04.894306   25306 main.go:141] libmachine: (ha-450021-m02) Calling .PreCreateCheck
	I1014 13:55:04.894485   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:04.894889   25306 main.go:141] libmachine: Creating machine...
	I1014 13:55:04.894903   25306 main.go:141] libmachine: (ha-450021-m02) Calling .Create
	I1014 13:55:04.895072   25306 main.go:141] libmachine: (ha-450021-m02) Creating KVM machine...
	I1014 13:55:04.896272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing default KVM network
	I1014 13:55:04.896429   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found existing private KVM network mk-ha-450021
	I1014 13:55:04.896566   25306 main.go:141] libmachine: (ha-450021-m02) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:04.896592   25306 main.go:141] libmachine: (ha-450021-m02) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:55:04.896679   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:04.896574   25672 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:04.896767   25306 main.go:141] libmachine: (ha-450021-m02) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:55:05.156236   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.156095   25672 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa...
	I1014 13:55:05.229289   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229176   25672 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk...
	I1014 13:55:05.229317   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing magic tar header
	I1014 13:55:05.229327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Writing SSH key tar header
	I1014 13:55:05.229334   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:05.229291   25672 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 ...
	I1014 13:55:05.229448   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02
	I1014 13:55:05.229476   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:55:05.229494   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02 (perms=drwx------)
	I1014 13:55:05.229512   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:55:05.229525   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:55:05.229536   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:55:05.229551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:55:05.229562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:55:05.229576   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Checking permissions on dir: /home
	I1014 13:55:05.229584   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Skipping /home - not owner
	I1014 13:55:05.229634   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:55:05.229673   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:55:05.229699   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:55:05.229714   25306 main.go:141] libmachine: (ha-450021-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:55:05.229724   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:05.230559   25306 main.go:141] libmachine: (ha-450021-m02) define libvirt domain using xml: 
	I1014 13:55:05.230582   25306 main.go:141] libmachine: (ha-450021-m02) <domain type='kvm'>
	I1014 13:55:05.230608   25306 main.go:141] libmachine: (ha-450021-m02)   <name>ha-450021-m02</name>
	I1014 13:55:05.230626   25306 main.go:141] libmachine: (ha-450021-m02)   <memory unit='MiB'>2200</memory>
	I1014 13:55:05.230636   25306 main.go:141] libmachine: (ha-450021-m02)   <vcpu>2</vcpu>
	I1014 13:55:05.230650   25306 main.go:141] libmachine: (ha-450021-m02)   <features>
	I1014 13:55:05.230660   25306 main.go:141] libmachine: (ha-450021-m02)     <acpi/>
	I1014 13:55:05.230666   25306 main.go:141] libmachine: (ha-450021-m02)     <apic/>
	I1014 13:55:05.230676   25306 main.go:141] libmachine: (ha-450021-m02)     <pae/>
	I1014 13:55:05.230682   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.230689   25306 main.go:141] libmachine: (ha-450021-m02)   </features>
	I1014 13:55:05.230699   25306 main.go:141] libmachine: (ha-450021-m02)   <cpu mode='host-passthrough'>
	I1014 13:55:05.230706   25306 main.go:141] libmachine: (ha-450021-m02)   
	I1014 13:55:05.230711   25306 main.go:141] libmachine: (ha-450021-m02)   </cpu>
	I1014 13:55:05.230718   25306 main.go:141] libmachine: (ha-450021-m02)   <os>
	I1014 13:55:05.230728   25306 main.go:141] libmachine: (ha-450021-m02)     <type>hvm</type>
	I1014 13:55:05.230739   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='cdrom'/>
	I1014 13:55:05.230748   25306 main.go:141] libmachine: (ha-450021-m02)     <boot dev='hd'/>
	I1014 13:55:05.230763   25306 main.go:141] libmachine: (ha-450021-m02)     <bootmenu enable='no'/>
	I1014 13:55:05.230773   25306 main.go:141] libmachine: (ha-450021-m02)   </os>
	I1014 13:55:05.230780   25306 main.go:141] libmachine: (ha-450021-m02)   <devices>
	I1014 13:55:05.230790   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='cdrom'>
	I1014 13:55:05.230819   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/boot2docker.iso'/>
	I1014 13:55:05.230839   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hdc' bus='scsi'/>
	I1014 13:55:05.230847   25306 main.go:141] libmachine: (ha-450021-m02)       <readonly/>
	I1014 13:55:05.230854   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230864   25306 main.go:141] libmachine: (ha-450021-m02)     <disk type='file' device='disk'>
	I1014 13:55:05.230881   25306 main.go:141] libmachine: (ha-450021-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:55:05.230897   25306 main.go:141] libmachine: (ha-450021-m02)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/ha-450021-m02.rawdisk'/>
	I1014 13:55:05.230912   25306 main.go:141] libmachine: (ha-450021-m02)       <target dev='hda' bus='virtio'/>
	I1014 13:55:05.230923   25306 main.go:141] libmachine: (ha-450021-m02)     </disk>
	I1014 13:55:05.230933   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230942   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='mk-ha-450021'/>
	I1014 13:55:05.230949   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230956   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.230966   25306 main.go:141] libmachine: (ha-450021-m02)     <interface type='network'>
	I1014 13:55:05.230975   25306 main.go:141] libmachine: (ha-450021-m02)       <source network='default'/>
	I1014 13:55:05.230987   25306 main.go:141] libmachine: (ha-450021-m02)       <model type='virtio'/>
	I1014 13:55:05.230998   25306 main.go:141] libmachine: (ha-450021-m02)     </interface>
	I1014 13:55:05.231008   25306 main.go:141] libmachine: (ha-450021-m02)     <serial type='pty'>
	I1014 13:55:05.231016   25306 main.go:141] libmachine: (ha-450021-m02)       <target port='0'/>
	I1014 13:55:05.231026   25306 main.go:141] libmachine: (ha-450021-m02)     </serial>
	I1014 13:55:05.231034   25306 main.go:141] libmachine: (ha-450021-m02)     <console type='pty'>
	I1014 13:55:05.231042   25306 main.go:141] libmachine: (ha-450021-m02)       <target type='serial' port='0'/>
	I1014 13:55:05.231047   25306 main.go:141] libmachine: (ha-450021-m02)     </console>
	I1014 13:55:05.231060   25306 main.go:141] libmachine: (ha-450021-m02)     <rng model='virtio'>
	I1014 13:55:05.231073   25306 main.go:141] libmachine: (ha-450021-m02)       <backend model='random'>/dev/random</backend>
	I1014 13:55:05.231079   25306 main.go:141] libmachine: (ha-450021-m02)     </rng>
	I1014 13:55:05.231090   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231096   25306 main.go:141] libmachine: (ha-450021-m02)     
	I1014 13:55:05.231107   25306 main.go:141] libmachine: (ha-450021-m02)   </devices>
	I1014 13:55:05.231116   25306 main.go:141] libmachine: (ha-450021-m02) </domain>
	I1014 13:55:05.231125   25306 main.go:141] libmachine: (ha-450021-m02) 
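
	The domain definition echoed line by line above is the XML handed to libvirt to define the ha-450021-m02 VM: the boot2docker ISO as a CD-ROM boot device, the raw disk, one interface on the private mk-ha-450021 network and one on the default network, a serial console, and a virtio RNG. A trimmed-down Go sketch of rendering such a definition with text/template; the template and values are placeholders, not the exact ones used by docker-machine-driver-kvm2:

package main

import (
	"os"
	"text/template"
)

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
</domain>
`

type machine struct {
	Name     string
	MemoryMB int
	CPUs     int
}

func main() {
	// Values mirror the node created in the log above.
	m := machine{Name: "ha-450021-m02", MemoryMB: 2200, CPUs: 2}
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	if err := tmpl.Execute(os.Stdout, m); err != nil {
		panic(err)
	}
}
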
	I1014 13:55:05.238505   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:39:fb:46 in network default
	I1014 13:55:05.239084   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring networks are active...
	I1014 13:55:05.239109   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:05.239788   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network default is active
	I1014 13:55:05.240113   25306 main.go:141] libmachine: (ha-450021-m02) Ensuring network mk-ha-450021 is active
	I1014 13:55:05.240488   25306 main.go:141] libmachine: (ha-450021-m02) Getting domain xml...
	I1014 13:55:05.241224   25306 main.go:141] libmachine: (ha-450021-m02) Creating domain...
	I1014 13:55:06.508569   25306 main.go:141] libmachine: (ha-450021-m02) Waiting to get IP...
	I1014 13:55:06.509274   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.509728   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.509800   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.509721   25672 retry.go:31] will retry after 253.994001ms: waiting for machine to come up
	I1014 13:55:06.765296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:06.765720   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:06.765754   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:06.765695   25672 retry.go:31] will retry after 330.390593ms: waiting for machine to come up
	I1014 13:55:07.097342   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.097779   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.097809   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.097725   25672 retry.go:31] will retry after 315.743674ms: waiting for machine to come up
	I1014 13:55:07.414954   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.415551   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.415596   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.415518   25672 retry.go:31] will retry after 505.396104ms: waiting for machine to come up
	I1014 13:55:07.922086   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:07.922530   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:07.922555   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:07.922518   25672 retry.go:31] will retry after 762.026701ms: waiting for machine to come up
	I1014 13:55:08.686471   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:08.686874   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:08.686903   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:08.686842   25672 retry.go:31] will retry after 891.989591ms: waiting for machine to come up
	I1014 13:55:09.580677   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:09.581174   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:09.581195   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:09.581150   25672 retry.go:31] will retry after 716.006459ms: waiting for machine to come up
	I1014 13:55:10.299036   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:10.299435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:10.299462   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:10.299390   25672 retry.go:31] will retry after 999.038321ms: waiting for machine to come up
	I1014 13:55:11.299678   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:11.300155   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:11.300182   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:11.300092   25672 retry.go:31] will retry after 1.384319167s: waiting for machine to come up
	I1014 13:55:12.686664   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:12.687084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:12.687130   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:12.687031   25672 retry.go:31] will retry after 1.750600606s: waiting for machine to come up
	I1014 13:55:14.439721   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:14.440157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:14.440185   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:14.440132   25672 retry.go:31] will retry after 2.719291498s: waiting for machine to come up
	I1014 13:55:17.160916   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:17.161338   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:17.161359   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:17.161288   25672 retry.go:31] will retry after 2.934487947s: waiting for machine to come up
	I1014 13:55:20.097623   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:20.098033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:20.098054   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:20.097994   25672 retry.go:31] will retry after 3.495468914s: waiting for machine to come up
	I1014 13:55:23.597556   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:23.598084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find current IP address of domain ha-450021-m02 in network mk-ha-450021
	I1014 13:55:23.598105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | I1014 13:55:23.598043   25672 retry.go:31] will retry after 4.955902252s: waiting for machine to come up
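
	The "will retry after ..." lines above come from a backoff loop: the driver repeatedly queries libvirt for a DHCP lease matching the VM's MAC address, sleeping for a jittered interval that grows from roughly 250 ms to about 5 s until an address appears. A minimal Go sketch of that pattern; lookupIP is a hypothetical stand-in for the lease query, not the driver's real API:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases
// for the VM's MAC address; here it simply fails for the first few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 4 {
		return "", errors.New("unable to find current IP address")
	}
	return "192.168.39.89", nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 0; attempt < 15; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("Found IP for machine:", ip)
			return
		}
		// Sleep with jitter and grow the delay, roughly matching the
		// retry intervals seen in the log above.
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", wait.Round(time.Millisecond), err)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	fmt.Println("gave up waiting for the machine to come up")
}
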
	I1014 13:55:28.555767   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.556335   25306 main.go:141] libmachine: (ha-450021-m02) Found IP for machine: 192.168.39.89
	I1014 13:55:28.556360   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has current primary IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.556369   25306 main.go:141] libmachine: (ha-450021-m02) Reserving static IP address...
	I1014 13:55:28.556652   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "ha-450021-m02", mac: "52:54:00:51:58:78", ip: "192.168.39.89"} in network mk-ha-450021
	I1014 13:55:28.627598   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:28.627633   25306 main.go:141] libmachine: (ha-450021-m02) Reserved static IP address: 192.168.39.89
	I1014 13:55:28.627646   25306 main.go:141] libmachine: (ha-450021-m02) Waiting for SSH to be available...
	I1014 13:55:28.629843   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:28.630161   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021
	I1014 13:55:28.630190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:51:58:78
	I1014 13:55:28.630310   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:28.630337   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:28.630368   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:28.630381   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:28.630396   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:28.634134   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:55:28.634150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:55:28.634157   25306 main.go:141] libmachine: (ha-450021-m02) DBG | command : exit 0
	I1014 13:55:28.634162   25306 main.go:141] libmachine: (ha-450021-m02) DBG | err     : exit status 255
	I1014 13:55:28.634170   25306 main.go:141] libmachine: (ha-450021-m02) DBG | output  : 
	I1014 13:55:31.634385   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Getting to WaitForSSH function...
	I1014 13:55:31.636814   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.637150   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.637249   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH client type: external
	I1014 13:55:31.637272   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa (-rw-------)
	I1014 13:55:31.637290   25306 main.go:141] libmachine: (ha-450021-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.89 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:55:31.637302   25306 main.go:141] libmachine: (ha-450021-m02) DBG | About to run SSH command:
	I1014 13:55:31.637327   25306 main.go:141] libmachine: (ha-450021-m02) DBG | exit 0
	I1014 13:55:31.762693   25306 main.go:141] libmachine: (ha-450021-m02) DBG | SSH cmd err, output: <nil>: 
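Once the IP is known, WaitForSSH repeatedly runs a trivial "exit 0" over SSH until the guest's sshd accepts the key; the earlier exit status 255 simply means "not ready yet". A minimal sketch of such a readiness probe using golang.org/x/crypto/ssh, with host-key checking disabled as in the ssh flags shown in the log; everything else here is an assumption, not minikube's implementation:

package sshwait

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// Ready returns nil once "exit 0" can be run over SSH at addr with the given key.
func Ready(addr, user, keyPath string, attempts int) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		Timeout:         10 * time.Second,
	}
	for i := 0; i < attempts; i++ {
		if client, err := ssh.Dial("tcp", addr, cfg); err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // sshd is up and accepts the key
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("ssh to %s never became ready", addr)
}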
	I1014 13:55:31.762993   25306 main.go:141] libmachine: (ha-450021-m02) KVM machine creation complete!
	I1014 13:55:31.763308   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:31.763786   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.763969   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:31.764130   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:55:31.764154   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetState
	I1014 13:55:31.765484   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:55:31.765498   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:55:31.765506   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:55:31.765513   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.767968   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768352   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.768386   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.768540   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.768701   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.768883   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.769050   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.769231   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.769460   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.769474   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:55:31.877746   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:55:31.877770   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:55:31.877779   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.880489   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.880858   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.880884   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.881034   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.881200   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881337   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.881482   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.881602   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.881767   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.881780   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:55:31.995447   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:55:31.995515   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:55:31.995529   25306 main.go:141] libmachine: Provisioning with buildroot...
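"found compatible host: buildroot" is derived from the /etc/os-release dump just above. A small sketch of parsing that output and matching the ID field; the helper names are assumptions:

package osdetect

import (
	"bufio"
	"strings"
)

// ParseOSRelease turns `cat /etc/os-release` output into a key/value map,
// stripping surrounding quotes from values.
func ParseOSRelease(out string) map[string]string {
	info := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		info[k] = strings.Trim(v, `"`)
	}
	return info
}

// IsBuildroot reports whether the parsed os-release identifies a Buildroot guest.
func IsBuildroot(info map[string]string) bool {
	return info["ID"] == "buildroot"
}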
	I1014 13:55:31.995541   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995787   25306 buildroot.go:166] provisioning hostname "ha-450021-m02"
	I1014 13:55:31.995817   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:31.995999   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:31.998434   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998820   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:31.998841   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:31.998986   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:31.999184   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999375   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:31.999496   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:31.999675   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:31.999836   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:31.999847   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m02 && echo "ha-450021-m02" | sudo tee /etc/hostname
	I1014 13:55:32.125055   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m02
	
	I1014 13:55:32.125093   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.128764   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129158   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.129191   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.129369   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.129548   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129704   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.129831   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.129997   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.130195   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.130212   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:55:32.251676   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
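The shell run above is deliberately idempotent: it only touches /etc/hosts when the new hostname is missing, rewriting the 127.0.1.1 line if one exists and appending otherwise. The same logic sketched in Go, assuming direct file access rather than the remote grep/sed/tee pipeline:

package hostsfix

import (
	"os"
	"strings"
)

// EnsureHostname mirrors the guarded script above: leave /etc/hosts alone if
// name is already listed, otherwise rewrite the 127.0.1.1 entry or append one.
func EnsureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		for _, field := range strings.Fields(l) {
			if field == name {
				return nil // hostname already mapped
			}
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}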
	I1014 13:55:32.251705   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:55:32.251731   25306 buildroot.go:174] setting up certificates
	I1014 13:55:32.251744   25306 provision.go:84] configureAuth start
	I1014 13:55:32.251763   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetMachineName
	I1014 13:55:32.252028   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.254513   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.254862   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.254887   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.255045   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.257083   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257408   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.257435   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.257565   25306 provision.go:143] copyHostCerts
	I1014 13:55:32.257592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257618   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:55:32.257629   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:55:32.257712   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:55:32.257797   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257821   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:55:32.257831   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:55:32.257870   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:55:32.257928   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257951   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:55:32.257959   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:55:32.257986   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:55:32.258053   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m02 san=[127.0.0.1 192.168.39.89 ha-450021-m02 localhost minikube]
	I1014 13:55:32.418210   25306 provision.go:177] copyRemoteCerts
	I1014 13:55:32.418267   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:55:32.418287   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.421033   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421356   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.421387   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.421587   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.421794   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.421949   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.422067   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.508850   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:55:32.508917   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:55:32.534047   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:55:32.534120   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:55:32.558263   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:55:32.558335   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:55:32.582102   25306 provision.go:87] duration metric: took 330.342541ms to configureAuth
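configureAuth issues a per-machine server certificate signed by the shared CA, valid for the SANs listed above (127.0.0.1, the machine IP, the hostname, localhost, minikube). A condensed sketch of issuing such a cert with crypto/x509; the key size, validity window, and the function itself are assumptions, not minikube's certificate code:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// NewServerCert issues a server certificate signed by caCert/caKey that is
// valid for the given DNS names and IP addresses (the SANs).
func NewServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, org string,
	dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames,
		IPAddresses:  ips,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil // DER-encoded cert plus its private key
}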
	I1014 13:55:32.582134   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:55:32.582301   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:32.582371   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.584832   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585166   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.585192   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.585349   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.585528   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585644   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.585802   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.585929   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.586092   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.586111   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:55:32.822330   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:55:32.822358   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:55:32.822366   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetURL
	I1014 13:55:32.823614   25306 main.go:141] libmachine: (ha-450021-m02) DBG | Using libvirt version 6000000
	I1014 13:55:32.826190   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.826567   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.826737   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:55:32.826754   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:55:32.826772   25306 client.go:171] duration metric: took 27.932717671s to LocalClient.Create
	I1014 13:55:32.826803   25306 start.go:167] duration metric: took 27.93279451s to libmachine.API.Create "ha-450021"
	I1014 13:55:32.826815   25306 start.go:293] postStartSetup for "ha-450021-m02" (driver="kvm2")
	I1014 13:55:32.826825   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:55:32.826846   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:32.827073   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:55:32.827097   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.829440   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829745   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.829785   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.829885   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.830054   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.830208   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.830348   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:32.918434   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:55:32.922919   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:55:32.922947   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:55:32.923010   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:55:32.923092   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:55:32.923101   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:55:32.923187   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:55:32.933129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:32.957819   25306 start.go:296] duration metric: took 130.989484ms for postStartSetup
	I1014 13:55:32.957871   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetConfigRaw
	I1014 13:55:32.958438   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:32.961024   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961393   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.961425   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.961630   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:55:32.961835   25306 start.go:128] duration metric: took 28.087968814s to createHost
	I1014 13:55:32.961858   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:32.964121   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964493   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:32.964528   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:32.964702   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:32.964854   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.964966   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:32.965109   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:32.965227   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:55:32.965432   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.89 22 <nil> <nil>}
	I1014 13:55:32.965446   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:55:33.079362   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914133.060490571
	
	I1014 13:55:33.079386   25306 fix.go:216] guest clock: 1728914133.060490571
	I1014 13:55:33.079405   25306 fix.go:229] Guest: 2024-10-14 13:55:33.060490571 +0000 UTC Remote: 2024-10-14 13:55:32.961847349 +0000 UTC m=+73.185560400 (delta=98.643222ms)
	I1014 13:55:33.079425   25306 fix.go:200] guest clock delta is within tolerance: 98.643222ms
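The guest's `date +%s.%N` output is compared with the host clock captured when the command was issued; only if the delta exceeded a tolerance would the guest clock be adjusted, and here the 98.6ms skew is accepted. A small sketch of that comparison; the tolerance value and the helper are assumptions:

package clockcheck

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// WithinTolerance parses the guest's `date +%s.%N` output and reports whether
// it is within tol of the host time captured when the command was issued.
func WithinTolerance(guestOut string, host time.Time, tol time.Duration) (time.Duration, bool, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol, nil
}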
	I1014 13:55:33.079431   25306 start.go:83] releasing machines lock for "ha-450021-m02", held for 28.205646747s
	I1014 13:55:33.079452   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.079689   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:33.082245   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.082619   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.082645   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.085035   25306 out.go:177] * Found network options:
	I1014 13:55:33.086426   25306 out.go:177]   - NO_PROXY=192.168.39.176
	W1014 13:55:33.087574   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.087613   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088138   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088304   25306 main.go:141] libmachine: (ha-450021-m02) Calling .DriverName
	I1014 13:55:33.088401   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:55:33.088445   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	W1014 13:55:33.088467   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:55:33.088536   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:55:33.088557   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHHostname
	I1014 13:55:33.091084   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091105   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091497   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091525   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:33.091546   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091562   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:33.091675   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091813   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHPort
	I1014 13:55:33.091867   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.091959   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHKeyPath
	I1014 13:55:33.092027   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092088   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetSSHUsername
	I1014 13:55:33.092156   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.092203   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.89 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m02/id_rsa Username:docker}
	I1014 13:55:33.324240   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:55:33.330527   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:55:33.330586   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:55:33.345640   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:55:33.345657   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:55:33.345701   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:55:33.361741   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:55:33.375019   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:55:33.375071   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:55:33.388301   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:55:33.401227   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:55:33.511329   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:55:33.658848   25306 docker.go:233] disabling docker service ...
	I1014 13:55:33.658913   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:55:33.673279   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:55:33.685917   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:55:33.818316   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:55:33.936222   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:55:33.950467   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:55:33.970208   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:55:33.970265   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.984110   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:55:33.984169   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:33.995549   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.006565   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.018479   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:55:34.030013   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.041645   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.059707   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:55:34.070442   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:55:34.080309   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:55:34.080366   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:55:34.093735   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:55:34.103445   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:34.215901   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
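The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force cgroup_manager to "cgroupfs", reset conmon_cgroup, and open unprivileged low ports via default_sysctls, after which crio is restarted. A sketch of the same kind of line-oriented rewrite done directly in Go; the regexes are simplified and only two of the edits are shown:

package crioconf

import (
	"os"
	"regexp"
)

// PinPauseImageAndCgroupfs applies two of the edits from the log to a CRI-O
// drop-in config: set pause_image and force cgroup_manager = "cgroupfs".
func PinPauseImageAndCgroupfs(path, pauseImage string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0644)
}

In the log the equivalent edits are applied remotely and take effect once `systemctl restart crio` completes.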
	I1014 13:55:34.308754   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:55:34.308820   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:55:34.313625   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:55:34.313676   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:55:34.317635   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:55:34.356534   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:55:34.356604   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.384187   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:55:34.414404   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:55:34.415699   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:55:34.416965   25306 main.go:141] libmachine: (ha-450021-m02) Calling .GetIP
	I1014 13:55:34.419296   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419601   25306 main.go:141] libmachine: (ha-450021-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:51:58:78", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:55:19 +0000 UTC Type:0 Mac:52:54:00:51:58:78 Iaid: IPaddr:192.168.39.89 Prefix:24 Hostname:ha-450021-m02 Clientid:01:52:54:00:51:58:78}
	I1014 13:55:34.419628   25306 main.go:141] libmachine: (ha-450021-m02) DBG | domain ha-450021-m02 has defined IP address 192.168.39.89 and MAC address 52:54:00:51:58:78 in network mk-ha-450021
	I1014 13:55:34.419811   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:55:34.423754   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:55:34.435980   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:55:34.436151   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:34.436381   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.436419   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.450826   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I1014 13:55:34.451213   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.451655   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.451677   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.451944   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.452123   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:55:34.453521   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:34.453781   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:34.453811   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:34.467708   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35033
	I1014 13:55:34.468144   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:34.468583   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:34.468597   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:34.468863   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:34.469023   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:34.469168   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.89
	I1014 13:55:34.469180   25306 certs.go:194] generating shared ca certs ...
	I1014 13:55:34.469197   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.469314   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:55:34.469365   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:55:34.469378   25306 certs.go:256] generating profile certs ...
	I1014 13:55:34.469463   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:55:34.469494   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796
	I1014 13:55:34.469515   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.254]
	I1014 13:55:34.810302   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 ...
	I1014 13:55:34.810336   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796: {Name:mk62309e383c07d7599f8a1200bdc69462a2d14a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810530   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 ...
	I1014 13:55:34.810549   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796: {Name:mkf013e40a46367f5d473382a243ff918ed6f0f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:55:34.810679   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:55:34.810843   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.ffb9c796 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:55:34.811031   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:55:34.811055   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:55:34.811078   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:55:34.811100   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:55:34.811122   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:55:34.811141   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:55:34.811162   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:55:34.811184   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:55:34.811205   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:55:34.811281   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:55:34.811405   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:55:34.811439   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:55:34.811482   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:55:34.811508   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:55:34.811530   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:55:34.811573   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:55:34.811602   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:34.811623   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:55:34.811635   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:55:34.811667   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:34.814657   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815058   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:34.815083   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:34.815262   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:34.815417   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:34.815552   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:34.815647   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:34.891004   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:55:34.895702   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:55:34.906613   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:55:34.910438   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:55:34.923172   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:55:34.928434   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:55:34.941440   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:55:34.946469   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:55:34.957168   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:55:34.961259   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:55:34.972556   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:55:34.980332   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:55:34.991839   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:55:35.019053   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:55:35.043395   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:55:35.066158   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:55:35.088175   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 13:55:35.110925   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 13:55:35.134916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:55:35.158129   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:55:35.180405   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:55:35.202548   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:55:35.225992   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:55:35.249981   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:55:35.266180   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:55:35.282687   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:55:35.299271   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:55:35.316623   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:55:35.332853   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:55:35.348570   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:55:35.364739   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:55:35.370372   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:55:35.380736   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385152   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.385211   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:55:35.390839   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:55:35.401523   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:55:35.412185   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416457   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.416547   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:55:35.421940   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:55:35.432212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:55:35.442100   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446159   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.446196   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:55:35.451427   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:55:35.461211   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:55:35.465126   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:55:35.465175   25306 kubeadm.go:934] updating node {m02 192.168.39.89 8443 v1.31.1 crio true true} ...
	I1014 13:55:35.465273   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:55:35.465315   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:55:35.465353   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:55:35.480860   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:55:35.480912   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
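The manifest above is the kube-vip static pod that later gets written to /etc/kubernetes/manifests/kube-vip.yaml. A hypothetical sketch of rendering such a manifest from a Go text/template; the template below is heavily trimmed and is not minikube's actual kube-vip template, only the VIP, port, and image values are taken from the log:

```go
package main

import (
	"os"
	"text/template"
)

// Illustrative stand-in for the manifest generated above.
const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
    - name: cp_enable
      value: "true"
  hostNetwork: true
`

type vipConfig struct {
	Image string
	VIP   string
	Port  string
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	// Values taken from the log: the HA VIP 192.168.39.254 served on port 8443.
	_ = t.Execute(os.Stdout, vipConfig{
		Image: "ghcr.io/kube-vip/kube-vip:v0.8.3",
		VIP:   "192.168.39.254",
		Port:  "8443",
	})
}
```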
	I1014 13:55:35.480953   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.489708   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:55:35.489755   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:55:35.498478   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:55:35.498498   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498541   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:55:35.498556   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet
	I1014 13:55:35.498585   25306 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm
	I1014 13:55:35.502947   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:55:35.502966   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:55:36.107052   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.107146   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:55:36.112161   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:55:36.112193   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:55:36.135646   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:55:36.156399   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.156509   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:55:36.173587   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:55:36.173634   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
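The sequence above stats each remote path first and only copies kubectl, kubeadm, and kubelet from the local cache when they are missing, downloading them from dl.k8s.io with a published checksum otherwise. A rough sketch of that cache-then-verify pattern, assuming the published `<binary>.sha256` files contain just the hex digest (fetchIfMissing is our own helper, not minikube's downloader):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchIfMissing downloads url to dest unless dest already exists,
// verifying the result against the sibling .sha256 file.
func fetchIfMissing(url, dest string) error {
	if _, err := os.Stat(dest); err == nil {
		return nil // already cached
	}
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}

	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", dest)
	}
	return nil
}

func main() {
	const base = "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/"
	for _, bin := range []string{"kubectl", "kubeadm", "kubelet"} {
		if err := fetchIfMissing(base+bin, "/tmp/"+bin); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```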
	I1014 13:55:36.629216   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:55:36.638544   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:55:36.654373   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:55:36.670100   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:55:36.685420   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:55:36.689062   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
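The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the HA VIP. The same logic as a small Go sketch (pinControlPlane is a name we made up; running it against the real /etc/hosts requires root):

```go
package main

import (
	"os"
	"strings"
)

// pinControlPlane drops any stale control-plane.minikube.internal line and
// appends the current VIP mapping, mirroring the grep -v / echo pipeline above.
func pinControlPlane(hostsPath, ip string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var keep []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // stale entry
		}
		keep = append(keep, line)
	}
	keep = append(keep, ip+"\tcontrol-plane.minikube.internal")
	return os.WriteFile(hostsPath, []byte(strings.Join(keep, "\n")+"\n"), 0o644)
}

func main() {
	_ = pinControlPlane("/etc/hosts", "192.168.39.254")
}
```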
	I1014 13:55:36.700413   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:36.822396   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:36.840300   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:55:36.840777   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:55:36.840820   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:55:36.856367   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35101
	I1014 13:55:36.856879   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:55:36.857323   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:55:36.857351   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:55:36.857672   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:55:36.857841   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:55:36.857975   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:55:36.858071   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:55:36.858091   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:55:36.860736   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861146   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:55:36.861185   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:55:36.861337   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:55:36.861529   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:55:36.861694   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:55:36.861807   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:55:37.015771   25306 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:37.015819   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443"
	I1014 13:55:58.710606   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token n1vmb9.g7muq8my4o5hlpei --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m02 --control-plane --apiserver-advertise-address=192.168.39.89 --apiserver-bind-port=8443": (21.694741621s)
	I1014 13:55:58.710647   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:55:59.236903   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m02 minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:55:59.350641   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:55:59.452342   25306 start.go:319] duration metric: took 22.5943626s to joinCluster
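The join above is driven by a token minted on the primary with `kubeadm token create --print-join-command --ttl=0` and replayed on m02 with control-plane flags. A sketch of how that invocation could be assembled in Go; the flag set matches the log, but the token and CA hash below are placeholders rather than the real values:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// joinCommand builds a kubeadm join invocation for an additional
// control-plane node, using the flags visible in the log above.
func joinCommand(endpoint, token, caHash, nodeName, advertiseIP string) *exec.Cmd {
	args := []string{
		"join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name", nodeName,
		"--control-plane",
		"--apiserver-advertise-address", advertiseIP,
		"--apiserver-bind-port", "8443",
	}
	return exec.Command("/var/lib/minikube/binaries/v1.31.1/kubeadm", args...)
}

func main() {
	cmd := joinCommand(
		"control-plane.minikube.internal:8443",
		"<token>",              // placeholder for the minted bootstrap token
		"sha256:<ca-cert-hash>", // placeholder for the discovery CA hash
		"ha-450021-m02",
		"192.168.39.89",
	)
	fmt.Println(strings.Join(cmd.Args, " "))
}
```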
	I1014 13:55:59.452418   25306 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:55:59.452735   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:55:59.453925   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:55:59.454985   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:55:59.700035   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:55:59.782880   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:55:59.783215   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:55:59.783307   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
	I1014 13:55:59.783576   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m02" to be "Ready" ...
	I1014 13:55:59.783682   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:55:59.783696   25306 round_trippers.go:469] Request Headers:
	I1014 13:55:59.783707   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:55:59.783718   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:55:59.796335   25306 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1014 13:56:00.284246   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.284269   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.284281   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.284288   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.300499   25306 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I1014 13:56:00.784180   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:00.784204   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:00.784212   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:00.784217   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:00.811652   25306 round_trippers.go:574] Response Status: 200 OK in 27 milliseconds
	I1014 13:56:01.284849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.284881   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.284893   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.284898   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.288918   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:01.783917   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:01.783937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:01.783945   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:01.783949   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:01.787799   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:01.788614   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:02.284602   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.284624   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.284632   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.284642   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.290773   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:02.783789   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:02.783815   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:02.783826   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:02.783831   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:02.788075   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.284032   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.284057   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.284068   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.284074   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.287614   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:03.783925   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:03.783945   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:03.783953   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:03.783956   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:03.788205   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:03.788893   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:04.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.283987   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.283995   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.283999   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.287325   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:04.784192   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:04.784212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:04.784219   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:04.784225   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:04.787474   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:05.284787   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.284804   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.284813   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.284815   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.293558   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:05.784473   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:05.784495   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:05.784505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:05.784509   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:05.787964   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:06.283912   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.283936   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.283946   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.283954   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.286733   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:06.287200   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:06.784670   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:06.784694   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:06.784706   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:06.784711   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:06.788422   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:07.283873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.283901   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.283913   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.283918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.286693   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:07.784588   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:07.784609   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:07.784617   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:07.784621   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:07.787856   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:08.284107   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.284126   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.284134   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.284138   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.287096   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:08.287719   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:08.784096   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:08.784116   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:08.784124   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:08.784127   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:08.787645   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.284728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.284752   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.284759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.284764   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.288184   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:09.784057   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:09.784097   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:09.784108   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:09.784122   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:09.793007   25306 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1014 13:56:10.284378   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.284400   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.284408   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.284413   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.287852   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:10.288463   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:10.783831   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:10.783850   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:10.783858   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:10.783862   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:10.787590   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:11.284759   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.284781   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.284790   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.284794   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.287610   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:11.784640   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:11.784659   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:11.784667   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:11.784672   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:11.787776   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:12.283968   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.283997   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.284009   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.284014   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.289974   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:56:12.290779   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:12.784021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:12.784047   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:12.784061   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:12.784069   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:12.787917   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.283870   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.283893   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.283901   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.287328   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:13.784620   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:13.784644   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:13.784653   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:13.784657   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:13.787810   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.283867   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.283892   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.283900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.283905   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.287541   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.784419   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:14.784440   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:14.784447   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:14.784450   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:14.787853   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:14.788359   25306 node_ready.go:53] node "ha-450021-m02" has status "Ready":"False"
	I1014 13:56:15.284687   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.284709   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.284720   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.284726   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.287861   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.288461   25306 node_ready.go:49] node "ha-450021-m02" has status "Ready":"True"
	I1014 13:56:15.288480   25306 node_ready.go:38] duration metric: took 15.504881835s for node "ha-450021-m02" to be "Ready" ...
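The raw GETs above poll /api/v1/nodes/ha-450021-m02 roughly twice per second until the Ready condition flips to True. An equivalent, hypothetical client-go version of that wait (waitNodeReady is our own helper; the kubeconfig path is the one shown in the log):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node until its Ready condition is True or the timeout expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat errors as transient and keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19790-7836/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-450021-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}
```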
	I1014 13:56:15.288487   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:56:15.288543   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:15.288553   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.288559   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.288563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.292417   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.298105   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.298175   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:56:15.298182   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.298189   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.298194   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.300838   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.301679   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.301692   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.301699   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.301703   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.304037   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.304599   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.304614   25306 pod_ready.go:82] duration metric: took 6.489417ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304622   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.304661   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:56:15.304669   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.304683   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.304694   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.306880   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.307573   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.307590   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.307600   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.307610   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.309331   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.309944   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.309963   25306 pod_ready.go:82] duration metric: took 5.334499ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.309975   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.310021   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:56:15.310032   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.310044   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.310060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312281   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.312954   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.312972   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.312984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.312989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.314997   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:15.315561   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.315581   25306 pod_ready.go:82] duration metric: took 5.597491ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315592   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.315648   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:56:15.315660   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.315671   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.315680   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.317496   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.318188   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:15.318205   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.318217   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.318224   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.320143   25306 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1014 13:56:15.320663   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.320681   25306 pod_ready.go:82] duration metric: took 5.077444ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.320700   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.485053   25306 request.go:632] Waited for 164.298634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485113   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:56:15.485118   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.485126   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.485130   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.488373   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.685383   25306 request.go:632] Waited for 196.403765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685451   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:15.685458   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.685469   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.685478   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.688990   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:15.689603   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:15.689627   25306 pod_ready.go:82] duration metric: took 368.913108ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.689641   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:15.885558   25306 request.go:632] Waited for 195.846701ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885605   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:56:15.885611   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:15.885618   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:15.885623   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:15.889124   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.084785   25306 request.go:632] Waited for 194.38123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.084845   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.084853   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.084857   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.088301   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.088998   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.089015   25306 pod_ready.go:82] duration metric: took 399.36552ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
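The recurring "Waited ... due to client-side throttling" lines are client-go's default rate limiter at work: with QPS and Burst left at zero in the rest.Config dump above, the client falls back to 5 requests per second with a burst of 10. A short sketch of where that knob lives, purely as an illustration (not a change the test makes):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19790-7836/kubeconfig")
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second when left at 0
	cfg.Burst = 100 // default burst is 10
	_ = kubernetes.NewForConfigOrDie(cfg)
}
```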
	I1014 13:56:16.089025   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.285209   25306 request.go:632] Waited for 196.12444ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285293   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:56:16.285302   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.285313   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.285319   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.289023   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.485127   25306 request.go:632] Waited for 195.353812ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485198   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:16.485212   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.485224   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.485231   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.488483   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.489170   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.489190   25306 pod_ready.go:82] duration metric: took 400.158231ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.489202   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.685336   25306 request.go:632] Waited for 196.062822ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685418   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:56:16.685429   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.685440   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.685449   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.688757   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.884883   25306 request.go:632] Waited for 195.393841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884933   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:16.884937   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:16.884945   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:16.884950   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:16.888074   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:16.888564   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:16.888582   25306 pod_ready.go:82] duration metric: took 399.371713ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:16.888594   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.084731   25306 request.go:632] Waited for 196.036159ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084792   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:56:17.084799   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.084811   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.084818   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.088594   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.284774   25306 request.go:632] Waited for 195.293808ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284866   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:17.284878   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.284889   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.284900   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.288050   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.288623   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.288647   25306 pod_ready.go:82] duration metric: took 400.044261ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.288659   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.485648   25306 request.go:632] Waited for 196.912408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485723   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:56:17.485734   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.485744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.485752   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.488420   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:56:17.685402   25306 request.go:632] Waited for 196.37897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685455   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:17.685460   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.685467   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.685471   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.689419   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:17.690366   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:17.690386   25306 pod_ready.go:82] duration metric: took 401.717488ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.690395   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:17.885498   25306 request.go:632] Waited for 195.043697ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:56:17.885569   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:17.885576   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:17.885581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:17.888648   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.085570   25306 request.go:632] Waited for 196.366356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085639   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:56:18.085649   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.085660   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.085668   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.088834   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.089495   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.089519   25306 pod_ready.go:82] duration metric: took 399.116695ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.089532   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.285606   25306 request.go:632] Waited for 196.011378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285677   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:56:18.285685   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.285693   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.285699   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.288947   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.484902   25306 request.go:632] Waited for 195.327209ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484963   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:56:18.484970   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.484981   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.484989   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.488080   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.488592   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:56:18.488612   25306 pod_ready.go:82] duration metric: took 399.071687ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:56:18.488628   25306 pod_ready.go:39] duration metric: took 3.200130009s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
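The pod_ready loop above walks the system-critical components (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) and checks each pod's Ready condition via raw GETs. A hypothetical client-go sketch of that check (podReady is our helper, not minikube's):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19790-7836/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// Selectors mirror the system-critical labels listed in the log.
	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler"}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
		}
	}
}
```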
	I1014 13:56:18.488645   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:56:18.488706   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:56:18.504222   25306 api_server.go:72] duration metric: took 19.051768004s to wait for apiserver process to appear ...
	I1014 13:56:18.504252   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:56:18.504274   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:56:18.508419   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1014 13:56:18.508480   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:56:18.508494   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.508504   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.508511   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.509353   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:56:18.509470   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:56:18.509489   25306 api_server.go:131] duration metric: took 5.230064ms to wait for apiserver health ...
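The health check above hits /healthz on the apiserver and then reads /version to report the control-plane version. A minimal sketch of the same two probes with net/http; TLS verification is skipped only to keep the example short, and depending on RBAC these endpoints may require the client credentials that the real check takes from the kubeconfig:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustration only: skip certificate verification instead of loading the cluster CA.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.176:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, string(body))
	}
}
```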
	I1014 13:56:18.509499   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:56:18.684863   25306 request.go:632] Waited for 175.279951ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684960   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:18.684974   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.684985   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.684994   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.691157   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:56:18.697135   25306 system_pods.go:59] 17 kube-system pods found
	I1014 13:56:18.697234   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:18.697252   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:18.697264   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:18.697271   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:18.697279   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:18.697284   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:18.697290   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:18.697299   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:18.697305   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:18.697314   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:18.697319   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:18.697328   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:18.697334   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:18.697340   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:18.697345   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:18.697350   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:18.697356   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:18.697364   25306 system_pods.go:74] duration metric: took 187.854432ms to wait for pod list to return data ...
	I1014 13:56:18.697375   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:56:18.884741   25306 request.go:632] Waited for 187.279644ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884797   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:56:18.884802   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:18.884809   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:18.884813   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:18.888582   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:56:18.888812   25306 default_sa.go:45] found service account: "default"
	I1014 13:56:18.888830   25306 default_sa.go:55] duration metric: took 191.448571ms for default service account to be created ...
	I1014 13:56:18.888841   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:56:19.085294   25306 request.go:632] Waited for 196.363765ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085358   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:56:19.085366   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.085377   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.085383   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.092864   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:56:19.097323   25306 system_pods.go:86] 17 kube-system pods found
	I1014 13:56:19.097351   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:56:19.097357   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:56:19.097362   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:56:19.097366   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:56:19.097370   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:56:19.097374   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:56:19.097377   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:56:19.097382   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:56:19.097387   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:56:19.097390   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:56:19.097394   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:56:19.097398   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:56:19.097402   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:56:19.097411   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:56:19.097417   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:56:19.097420   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:56:19.097423   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:56:19.097429   25306 system_pods.go:126] duration metric: took 208.581366ms to wait for k8s-apps to be running ...
	I1014 13:56:19.097436   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:56:19.097477   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:19.112071   25306 system_svc.go:56] duration metric: took 14.628482ms WaitForService to wait for kubelet
	I1014 13:56:19.112097   25306 kubeadm.go:582] duration metric: took 19.659648051s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:56:19.112113   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:56:19.285537   25306 request.go:632] Waited for 173.355083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285629   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:56:19.285637   25306 round_trippers.go:469] Request Headers:
	I1014 13:56:19.285649   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:56:19.285654   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:56:19.289726   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:56:19.290673   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290698   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290712   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:56:19.290717   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:56:19.290723   25306 node_conditions.go:105] duration metric: took 178.605419ms to run NodePressure ...
	I1014 13:56:19.290740   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:56:19.290784   25306 start.go:255] writing updated cluster config ...
	I1014 13:56:19.292978   25306 out.go:201] 
	I1014 13:56:19.294410   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:19.294496   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.296041   25306 out.go:177] * Starting "ha-450021-m03" control-plane node in "ha-450021" cluster
	I1014 13:56:19.297096   25306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:56:19.297116   25306 cache.go:56] Caching tarball of preloaded images
	I1014 13:56:19.297204   25306 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 13:56:19.297214   25306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 13:56:19.297292   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:19.297485   25306 start.go:360] acquireMachinesLock for ha-450021-m03: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 13:56:19.297521   25306 start.go:364] duration metric: took 20.106µs to acquireMachinesLock for "ha-450021-m03"
	I1014 13:56:19.297537   25306 start.go:93] Provisioning new machine with config: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:19.297616   25306 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1014 13:56:19.299122   25306 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 13:56:19.299222   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:19.299255   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:19.313918   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I1014 13:56:19.314305   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:19.314837   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:19.314851   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:19.315181   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:19.315347   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:19.315509   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:19.315639   25306 start.go:159] libmachine.API.Create for "ha-450021" (driver="kvm2")
	I1014 13:56:19.315670   25306 client.go:168] LocalClient.Create starting
	I1014 13:56:19.315704   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 13:56:19.315748   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315768   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315834   25306 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 13:56:19.315859   25306 main.go:141] libmachine: Decoding PEM data...
	I1014 13:56:19.315870   25306 main.go:141] libmachine: Parsing certificate...
	I1014 13:56:19.315884   25306 main.go:141] libmachine: Running pre-create checks...
	I1014 13:56:19.315892   25306 main.go:141] libmachine: (ha-450021-m03) Calling .PreCreateCheck
	I1014 13:56:19.316068   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:19.316425   25306 main.go:141] libmachine: Creating machine...
	I1014 13:56:19.316438   25306 main.go:141] libmachine: (ha-450021-m03) Calling .Create
	I1014 13:56:19.316586   25306 main.go:141] libmachine: (ha-450021-m03) Creating KVM machine...
	I1014 13:56:19.317686   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing default KVM network
	I1014 13:56:19.317799   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found existing private KVM network mk-ha-450021
	I1014 13:56:19.317961   25306 main.go:141] libmachine: (ha-450021-m03) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.317988   25306 main.go:141] libmachine: (ha-450021-m03) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:56:19.318035   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.317950   26053 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.318138   25306 main.go:141] libmachine: (ha-450021-m03) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 13:56:19.552577   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.552461   26053 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa...
	I1014 13:56:19.731749   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731620   26053 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk...
	I1014 13:56:19.731783   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing magic tar header
	I1014 13:56:19.731797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Writing SSH key tar header
	I1014 13:56:19.731814   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:19.731727   26053 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 ...
	I1014 13:56:19.731831   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03
	I1014 13:56:19.731859   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03 (perms=drwx------)
	I1014 13:56:19.731873   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 13:56:19.731885   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 13:56:19.731899   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 13:56:19.731913   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:56:19.731942   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 13:56:19.731955   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 13:56:19.731964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home/jenkins
	I1014 13:56:19.731973   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Checking permissions on dir: /home
	I1014 13:56:19.731985   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 13:56:19.732001   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 13:56:19.732012   25306 main.go:141] libmachine: (ha-450021-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 13:56:19.732026   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:19.732040   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Skipping /home - not owner
	I1014 13:56:19.732949   25306 main.go:141] libmachine: (ha-450021-m03) define libvirt domain using xml: 
	I1014 13:56:19.732973   25306 main.go:141] libmachine: (ha-450021-m03) <domain type='kvm'>
	I1014 13:56:19.732984   25306 main.go:141] libmachine: (ha-450021-m03)   <name>ha-450021-m03</name>
	I1014 13:56:19.732992   25306 main.go:141] libmachine: (ha-450021-m03)   <memory unit='MiB'>2200</memory>
	I1014 13:56:19.733004   25306 main.go:141] libmachine: (ha-450021-m03)   <vcpu>2</vcpu>
	I1014 13:56:19.733014   25306 main.go:141] libmachine: (ha-450021-m03)   <features>
	I1014 13:56:19.733021   25306 main.go:141] libmachine: (ha-450021-m03)     <acpi/>
	I1014 13:56:19.733031   25306 main.go:141] libmachine: (ha-450021-m03)     <apic/>
	I1014 13:56:19.733038   25306 main.go:141] libmachine: (ha-450021-m03)     <pae/>
	I1014 13:56:19.733044   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733056   25306 main.go:141] libmachine: (ha-450021-m03)   </features>
	I1014 13:56:19.733071   25306 main.go:141] libmachine: (ha-450021-m03)   <cpu mode='host-passthrough'>
	I1014 13:56:19.733081   25306 main.go:141] libmachine: (ha-450021-m03)   
	I1014 13:56:19.733089   25306 main.go:141] libmachine: (ha-450021-m03)   </cpu>
	I1014 13:56:19.733099   25306 main.go:141] libmachine: (ha-450021-m03)   <os>
	I1014 13:56:19.733106   25306 main.go:141] libmachine: (ha-450021-m03)     <type>hvm</type>
	I1014 13:56:19.733117   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='cdrom'/>
	I1014 13:56:19.733126   25306 main.go:141] libmachine: (ha-450021-m03)     <boot dev='hd'/>
	I1014 13:56:19.733136   25306 main.go:141] libmachine: (ha-450021-m03)     <bootmenu enable='no'/>
	I1014 13:56:19.733151   25306 main.go:141] libmachine: (ha-450021-m03)   </os>
	I1014 13:56:19.733160   25306 main.go:141] libmachine: (ha-450021-m03)   <devices>
	I1014 13:56:19.733169   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='cdrom'>
	I1014 13:56:19.733183   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/boot2docker.iso'/>
	I1014 13:56:19.733196   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hdc' bus='scsi'/>
	I1014 13:56:19.733209   25306 main.go:141] libmachine: (ha-450021-m03)       <readonly/>
	I1014 13:56:19.733218   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733227   25306 main.go:141] libmachine: (ha-450021-m03)     <disk type='file' device='disk'>
	I1014 13:56:19.733239   25306 main.go:141] libmachine: (ha-450021-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 13:56:19.733252   25306 main.go:141] libmachine: (ha-450021-m03)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/ha-450021-m03.rawdisk'/>
	I1014 13:56:19.733266   25306 main.go:141] libmachine: (ha-450021-m03)       <target dev='hda' bus='virtio'/>
	I1014 13:56:19.733278   25306 main.go:141] libmachine: (ha-450021-m03)     </disk>
	I1014 13:56:19.733286   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733298   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='mk-ha-450021'/>
	I1014 13:56:19.733306   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733315   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733325   25306 main.go:141] libmachine: (ha-450021-m03)     <interface type='network'>
	I1014 13:56:19.733356   25306 main.go:141] libmachine: (ha-450021-m03)       <source network='default'/>
	I1014 13:56:19.733373   25306 main.go:141] libmachine: (ha-450021-m03)       <model type='virtio'/>
	I1014 13:56:19.733379   25306 main.go:141] libmachine: (ha-450021-m03)     </interface>
	I1014 13:56:19.733383   25306 main.go:141] libmachine: (ha-450021-m03)     <serial type='pty'>
	I1014 13:56:19.733387   25306 main.go:141] libmachine: (ha-450021-m03)       <target port='0'/>
	I1014 13:56:19.733394   25306 main.go:141] libmachine: (ha-450021-m03)     </serial>
	I1014 13:56:19.733399   25306 main.go:141] libmachine: (ha-450021-m03)     <console type='pty'>
	I1014 13:56:19.733403   25306 main.go:141] libmachine: (ha-450021-m03)       <target type='serial' port='0'/>
	I1014 13:56:19.733410   25306 main.go:141] libmachine: (ha-450021-m03)     </console>
	I1014 13:56:19.733415   25306 main.go:141] libmachine: (ha-450021-m03)     <rng model='virtio'>
	I1014 13:56:19.733430   25306 main.go:141] libmachine: (ha-450021-m03)       <backend model='random'>/dev/random</backend>
	I1014 13:56:19.733436   25306 main.go:141] libmachine: (ha-450021-m03)     </rng>
	I1014 13:56:19.733441   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733445   25306 main.go:141] libmachine: (ha-450021-m03)     
	I1014 13:56:19.733449   25306 main.go:141] libmachine: (ha-450021-m03)   </devices>
	I1014 13:56:19.733455   25306 main.go:141] libmachine: (ha-450021-m03) </domain>
	I1014 13:56:19.733462   25306 main.go:141] libmachine: (ha-450021-m03) 
	I1014 13:56:19.740127   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:3e:d5:3c in network default
	I1014 13:56:19.740688   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring networks are active...
	I1014 13:56:19.740710   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:19.741382   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network default is active
	I1014 13:56:19.741753   25306 main.go:141] libmachine: (ha-450021-m03) Ensuring network mk-ha-450021 is active
	I1014 13:56:19.742099   25306 main.go:141] libmachine: (ha-450021-m03) Getting domain xml...
	I1014 13:56:19.742834   25306 main.go:141] libmachine: (ha-450021-m03) Creating domain...
	I1014 13:56:21.010084   25306 main.go:141] libmachine: (ha-450021-m03) Waiting to get IP...
	I1014 13:56:21.010944   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.011316   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.011377   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.011315   26053 retry.go:31] will retry after 306.133794ms: waiting for machine to come up
	I1014 13:56:21.318826   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.319333   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.319361   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.319280   26053 retry.go:31] will retry after 366.66223ms: waiting for machine to come up
	I1014 13:56:21.687816   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:21.688312   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:21.688353   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:21.688274   26053 retry.go:31] will retry after 390.93754ms: waiting for machine to come up
	I1014 13:56:22.080797   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.081263   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.081290   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.081223   26053 retry.go:31] will retry after 398.805239ms: waiting for machine to come up
	I1014 13:56:22.481851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:22.482319   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:22.482343   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:22.482287   26053 retry.go:31] will retry after 640.042779ms: waiting for machine to come up
	I1014 13:56:23.123714   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:23.124086   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:23.124144   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:23.124073   26053 retry.go:31] will retry after 920.9874ms: waiting for machine to come up
	I1014 13:56:24.047070   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.047392   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.047414   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.047351   26053 retry.go:31] will retry after 897.422021ms: waiting for machine to come up
	I1014 13:56:24.946948   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:24.947347   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:24.947372   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:24.947310   26053 retry.go:31] will retry after 1.40276044s: waiting for machine to come up
	I1014 13:56:26.351855   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:26.352313   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:26.352340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:26.352279   26053 retry.go:31] will retry after 1.726907493s: waiting for machine to come up
	I1014 13:56:28.080396   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:28.080846   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:28.080875   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:28.080790   26053 retry.go:31] will retry after 1.482180268s: waiting for machine to come up
	I1014 13:56:29.564825   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:29.565318   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:29.565340   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:29.565288   26053 retry.go:31] will retry after 2.541525756s: waiting for machine to come up
	I1014 13:56:32.109990   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:32.110440   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:32.110469   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:32.110395   26053 retry.go:31] will retry after 2.914830343s: waiting for machine to come up
	I1014 13:56:35.026789   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:35.027206   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:35.027240   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:35.027152   26053 retry.go:31] will retry after 3.572900713s: waiting for machine to come up
	I1014 13:56:38.603496   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:38.603914   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find current IP address of domain ha-450021-m03 in network mk-ha-450021
	I1014 13:56:38.603943   25306 main.go:141] libmachine: (ha-450021-m03) DBG | I1014 13:56:38.603867   26053 retry.go:31] will retry after 3.566960315s: waiting for machine to come up
	I1014 13:56:42.173796   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174271   25306 main.go:141] libmachine: (ha-450021-m03) Found IP for machine: 192.168.39.55
	I1014 13:56:42.174288   25306 main.go:141] libmachine: (ha-450021-m03) Reserving static IP address...
	I1014 13:56:42.174301   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has current primary IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.174679   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "ha-450021-m03", mac: "52:54:00:af:04:2c", ip: "192.168.39.55"} in network mk-ha-450021
	I1014 13:56:42.249586   25306 main.go:141] libmachine: (ha-450021-m03) Reserved static IP address: 192.168.39.55
	I1014 13:56:42.249623   25306 main.go:141] libmachine: (ha-450021-m03) Waiting for SSH to be available...
	I1014 13:56:42.249632   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:42.252725   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:42.253185   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021
	I1014 13:56:42.253208   25306 main.go:141] libmachine: (ha-450021-m03) DBG | unable to find defined IP address of network mk-ha-450021 interface with MAC address 52:54:00:af:04:2c
	I1014 13:56:42.253434   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:42.253458   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:42.253486   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:42.253504   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:42.253518   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:42.256978   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: exit status 255: 
	I1014 13:56:42.256996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1014 13:56:42.257003   25306 main.go:141] libmachine: (ha-450021-m03) DBG | command : exit 0
	I1014 13:56:42.257008   25306 main.go:141] libmachine: (ha-450021-m03) DBG | err     : exit status 255
	I1014 13:56:42.257014   25306 main.go:141] libmachine: (ha-450021-m03) DBG | output  : 
	I1014 13:56:45.257522   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Getting to WaitForSSH function...
	I1014 13:56:45.260212   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260696   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.260726   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.260786   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH client type: external
	I1014 13:56:45.260815   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa (-rw-------)
	I1014 13:56:45.260836   25306 main.go:141] libmachine: (ha-450021-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.55 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 13:56:45.260845   25306 main.go:141] libmachine: (ha-450021-m03) DBG | About to run SSH command:
	I1014 13:56:45.260853   25306 main.go:141] libmachine: (ha-450021-m03) DBG | exit 0
	I1014 13:56:45.382585   25306 main.go:141] libmachine: (ha-450021-m03) DBG | SSH cmd err, output: <nil>: 
	I1014 13:56:45.382879   25306 main.go:141] libmachine: (ha-450021-m03) KVM machine creation complete!
	I1014 13:56:45.383199   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:45.383711   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.383880   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:45.384004   25306 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 13:56:45.384014   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetState
	I1014 13:56:45.385264   25306 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 13:56:45.385276   25306 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 13:56:45.385281   25306 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 13:56:45.385287   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.387787   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388084   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.388108   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.388291   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.388456   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388593   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.388714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.388830   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.389029   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.389040   25306 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 13:56:45.485735   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:56:45.485758   25306 main.go:141] libmachine: Detecting the provisioner...
	I1014 13:56:45.485768   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.488882   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489166   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.489189   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.489303   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.489486   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.489751   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.489875   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.490046   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.490060   25306 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 13:56:45.587324   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 13:56:45.587394   25306 main.go:141] libmachine: found compatible host: buildroot
	I1014 13:56:45.587407   25306 main.go:141] libmachine: Provisioning with buildroot...
	I1014 13:56:45.587422   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587668   25306 buildroot.go:166] provisioning hostname "ha-450021-m03"
	I1014 13:56:45.587694   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.587891   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.589987   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590329   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.590355   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.590484   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.590650   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590770   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.590887   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.591045   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.591197   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.591208   25306 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021-m03 && echo "ha-450021-m03" | sudo tee /etc/hostname
	I1014 13:56:45.708548   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021-m03
	
	I1014 13:56:45.708578   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.711602   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.711972   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.711996   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.712173   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.712328   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712490   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.712610   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.712744   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:45.712915   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:45.712938   25306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:56:45.819779   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:56:45.819813   25306 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 13:56:45.819833   25306 buildroot.go:174] setting up certificates
	I1014 13:56:45.819844   25306 provision.go:84] configureAuth start
	I1014 13:56:45.819857   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetMachineName
	I1014 13:56:45.820154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:45.823118   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823460   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.823487   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.823678   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.825593   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.825969   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.826000   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.826082   25306 provision.go:143] copyHostCerts
	I1014 13:56:45.826120   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826162   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 13:56:45.826174   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 13:56:45.826256   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 13:56:45.826387   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826414   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 13:56:45.826422   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 13:56:45.826470   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 13:56:45.826533   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826559   25306 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 13:56:45.826567   25306 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 13:56:45.826616   25306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 13:56:45.826689   25306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021-m03 san=[127.0.0.1 192.168.39.55 ha-450021-m03 localhost minikube]
	I1014 13:56:45.954899   25306 provision.go:177] copyRemoteCerts
	I1014 13:56:45.954971   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:56:45.955000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:45.957506   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957791   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:45.957818   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:45.957960   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:45.958125   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:45.958305   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:45.958436   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.036842   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 13:56:46.036916   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 13:56:46.062450   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 13:56:46.062515   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:56:46.086853   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 13:56:46.086926   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 13:56:46.115352   25306 provision.go:87] duration metric: took 295.495227ms to configureAuth
	I1014 13:56:46.115379   25306 buildroot.go:189] setting minikube options for container-runtime
	I1014 13:56:46.115621   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:46.115716   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.118262   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118631   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.118656   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.118842   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.119017   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119154   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.119286   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.119431   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.119582   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.119596   25306 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 13:56:46.343295   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 13:56:46.343323   25306 main.go:141] libmachine: Checking connection to Docker...
	I1014 13:56:46.343334   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetURL
	I1014 13:56:46.344763   25306 main.go:141] libmachine: (ha-450021-m03) DBG | Using libvirt version 6000000
	I1014 13:56:46.346964   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347332   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.347354   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.347553   25306 main.go:141] libmachine: Docker is up and running!
	I1014 13:56:46.347568   25306 main.go:141] libmachine: Reticulating splines...
	I1014 13:56:46.347575   25306 client.go:171] duration metric: took 27.031894224s to LocalClient.Create
	I1014 13:56:46.347595   25306 start.go:167] duration metric: took 27.031958272s to libmachine.API.Create "ha-450021"
	I1014 13:56:46.347605   25306 start.go:293] postStartSetup for "ha-450021-m03" (driver="kvm2")
	I1014 13:56:46.347614   25306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:56:46.347629   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.347825   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:56:46.347855   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.350344   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350734   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.350754   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.350907   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.351098   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.351237   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.351388   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.433896   25306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:56:46.438009   25306 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 13:56:46.438030   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 13:56:46.438090   25306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 13:56:46.438161   25306 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 13:56:46.438171   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 13:56:46.438246   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 13:56:46.448052   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:46.472253   25306 start.go:296] duration metric: took 124.635752ms for postStartSetup
	I1014 13:56:46.472307   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetConfigRaw
	I1014 13:56:46.472896   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.475688   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476037   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.476063   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.476352   25306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 13:56:46.476544   25306 start.go:128] duration metric: took 27.178917299s to createHost
	I1014 13:56:46.476567   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.478884   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479221   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.479251   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.479355   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.479528   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479638   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.479747   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.479874   25306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:56:46.480025   25306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.55 22 <nil> <nil>}
	I1014 13:56:46.480035   25306 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 13:56:46.583399   25306 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914206.561472302
	
	I1014 13:56:46.583425   25306 fix.go:216] guest clock: 1728914206.561472302
	I1014 13:56:46.583435   25306 fix.go:229] Guest: 2024-10-14 13:56:46.561472302 +0000 UTC Remote: 2024-10-14 13:56:46.476556325 +0000 UTC m=+146.700269378 (delta=84.915977ms)
	I1014 13:56:46.583455   25306 fix.go:200] guest clock delta is within tolerance: 84.915977ms
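The clock check above runs date +%s.%N on the guest and compares it with the host's wall clock, accepting the ~84.9ms drift because it falls within tolerance. A small worked sketch of that comparison in Go, with the timestamps taken from the log and a one-second tolerance assumed (the actual threshold is not shown here):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the
// host clock. Both values are UTC timestamps; the tolerance is an assumption.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Guest value from the log: 1728914206.561472302 (seconds.nanoseconds).
	guest := time.Unix(1728914206, 561472302).UTC()
	// Host value reconstructed so the example reproduces the logged ~84.9ms delta.
	host := guest.Add(-84915977 * time.Nanosecond)
	delta, ok := withinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}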
	I1014 13:56:46.583460   25306 start.go:83] releasing machines lock for "ha-450021-m03", held for 27.285931106s
	I1014 13:56:46.583477   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.583714   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:46.586281   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.586554   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.586578   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.589268   25306 out.go:177] * Found network options:
	I1014 13:56:46.590896   25306 out.go:177]   - NO_PROXY=192.168.39.176,192.168.39.89
	W1014 13:56:46.592325   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.592354   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.592374   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.592957   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593143   25306 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 13:56:46.593217   25306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:56:46.593262   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	W1014 13:56:46.593451   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	W1014 13:56:46.593472   25306 proxy.go:119] fail to check proxy env: Error ip not in block
	I1014 13:56:46.593517   25306 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 13:56:46.593532   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 13:56:46.596078   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596267   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596474   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596494   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596667   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.596762   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:46.596784   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:46.596836   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.596933   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 13:56:46.597000   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597050   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 13:56:46.597134   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.597191   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 13:56:46.597299   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 13:56:46.829516   25306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 13:56:46.836362   25306 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 13:56:46.836435   25306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:56:46.855005   25306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 13:56:46.855034   25306 start.go:495] detecting cgroup driver to use...
	I1014 13:56:46.855101   25306 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 13:56:46.873805   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 13:56:46.888317   25306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:56:46.888368   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:56:46.902770   25306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:56:46.916283   25306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:56:47.031570   25306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:56:47.186900   25306 docker.go:233] disabling docker service ...
	I1014 13:56:47.186971   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:56:47.202040   25306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:56:47.215421   25306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:56:47.352807   25306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:56:47.479560   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:56:47.493558   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:56:47.511643   25306 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 13:56:47.511704   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.521941   25306 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 13:56:47.522055   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.534488   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.545529   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.555346   25306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:56:47.565221   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.574851   25306 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.591247   25306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 13:56:47.601017   25306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:56:47.610150   25306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:56:47.610208   25306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:56:47.623643   25306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
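When the net.bridge.bridge-nf-call-iptables sysctl cannot be read, the log falls back to loading br_netfilter and then enables IPv4 forwarding. A sketch of that check-then-modprobe pattern run locally via os/exec (requires sudo; the helper below is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the pattern in the log: verify the bridge
// netfilter sysctl exists, load br_netfilter if it does not, then enable
// IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// Sysctl key missing: the br_netfilter module is probably not loaded yet.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}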
	I1014 13:56:47.632860   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:47.769053   25306 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 13:56:47.859548   25306 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 13:56:47.859617   25306 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 13:56:47.864769   25306 start.go:563] Will wait 60s for crictl version
	I1014 13:56:47.864838   25306 ssh_runner.go:195] Run: which crictl
	I1014 13:56:47.868622   25306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:56:47.912151   25306 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 13:56:47.912224   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.943678   25306 ssh_runner.go:195] Run: crio --version
	I1014 13:56:47.974464   25306 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 13:56:47.975982   25306 out.go:177]   - env NO_PROXY=192.168.39.176
	I1014 13:56:47.977421   25306 out.go:177]   - env NO_PROXY=192.168.39.176,192.168.39.89
	I1014 13:56:47.978761   25306 main.go:141] libmachine: (ha-450021-m03) Calling .GetIP
	I1014 13:56:47.981382   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.981851   25306 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 13:56:47.981880   25306 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 13:56:47.982078   25306 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 13:56:47.986330   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:56:47.999765   25306 mustload.go:65] Loading cluster: ha-450021
	I1014 13:56:47.999983   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:56:48.000276   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.000314   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.015013   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I1014 13:56:48.015440   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.015880   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.015898   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.016248   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.016426   25306 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 13:56:48.017904   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:48.018185   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:48.018221   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:48.032080   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38143
	I1014 13:56:48.032532   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:48.033010   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:48.033034   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:48.033376   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:48.033566   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:48.033738   25306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.55
	I1014 13:56:48.033750   25306 certs.go:194] generating shared ca certs ...
	I1014 13:56:48.033771   25306 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.033910   25306 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 13:56:48.033951   25306 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 13:56:48.033962   25306 certs.go:256] generating profile certs ...
	I1014 13:56:48.034054   25306 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 13:56:48.034099   25306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2
	I1014 13:56:48.034119   25306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.55 192.168.39.254]
	I1014 13:56:48.250009   25306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 ...
	I1014 13:56:48.250065   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2: {Name:mk915feb36aa4db7e40387e7070135b42d923437 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250246   25306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 ...
	I1014 13:56:48.250260   25306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2: {Name:mk5df80a68a940fb5e6799020daa8453d1ca5d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:56:48.250346   25306 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 13:56:48.250480   25306 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.b8fc6ee2 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 13:56:48.250647   25306 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 13:56:48.250665   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 13:56:48.250682   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 13:56:48.250698   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 13:56:48.250714   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 13:56:48.250729   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 13:56:48.250744   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 13:56:48.250759   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 13:56:48.282713   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 13:56:48.282807   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 13:56:48.282843   25306 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 13:56:48.282853   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 13:56:48.282876   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 13:56:48.282899   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:56:48.282919   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 13:56:48.282958   25306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 13:56:48.282987   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.283001   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.283013   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.283046   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:48.285808   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286249   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:48.286279   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:48.286442   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:48.286648   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:48.286791   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:48.286909   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:48.366887   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1014 13:56:48.372822   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1014 13:56:48.386233   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1014 13:56:48.391254   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I1014 13:56:48.402846   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1014 13:56:48.407460   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1014 13:56:48.418138   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1014 13:56:48.423366   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1014 13:56:48.435286   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1014 13:56:48.442980   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1014 13:56:48.457010   25306 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1014 13:56:48.462031   25306 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I1014 13:56:48.475327   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:56:48.499553   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:56:48.526670   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:56:48.552105   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 13:56:48.577419   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1014 13:56:48.600650   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 13:56:48.623847   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:56:48.649170   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 13:56:48.674110   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 13:56:48.700598   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 13:56:48.725176   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:56:48.750067   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1014 13:56:48.767549   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I1014 13:56:48.786866   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1014 13:56:48.804737   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1014 13:56:48.822022   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1014 13:56:48.840501   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I1014 13:56:48.858556   25306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1014 13:56:48.875294   25306 ssh_runner.go:195] Run: openssl version
	I1014 13:56:48.880974   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 13:56:48.892080   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896904   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.896954   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 13:56:48.902856   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 13:56:48.914212   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 13:56:48.926784   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931725   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.931780   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 13:56:48.937633   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 13:56:48.949727   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:56:48.960604   25306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965337   25306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.965398   25306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:56:48.970965   25306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 13:56:48.983521   25306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:56:48.987988   25306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:56:48.988067   25306 kubeadm.go:934] updating node {m03 192.168.39.55 8443 v1.31.1 crio true true} ...
	I1014 13:56:48.988197   25306 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.55
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:56:48.988224   25306 kube-vip.go:115] generating kube-vip config ...
	I1014 13:56:48.988260   25306 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 13:56:49.006786   25306 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 13:56:49.006878   25306 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
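The kube-vip static pod manifest above is generated from a template and, as shown further down, written to /etc/kubernetes/manifests/kube-vip.yaml. A minimal sketch of that kind of generation with text/template; it renders only a few of the fields from the manifest and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// kubeVipParams holds a handful of the values that vary per cluster in the
// manifest above; the real template has many more knobs.
type kubeVipParams struct {
	VIP   string // load-balanced control-plane address
	Port  string // API server port
	Image string // kube-vip image tag
}

var manifest = template.Must(template.New("kube-vip").Parse(`apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`))

func main() {
	// Values mirror the log above; in minikube they come from the cluster config.
	p := kubeVipParams{VIP: "192.168.39.254", Port: "8443", Image: "ghcr.io/kube-vip/kube-vip:v0.8.3"}
	_ = manifest.Execute(os.Stdout, p)
}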
	I1014 13:56:49.006948   25306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.017177   25306 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.1': No such file or directory
	
	Initiating transfer...
	I1014 13:56:49.017231   25306 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.1
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubeadm.sha256
	I1014 13:56:49.027571   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm -> /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027572   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
	I1014 13:56:49.027592   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl -> /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027633   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm
	I1014 13:56:49.027546   25306 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubelet.sha256
	I1014 13:56:49.027650   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl
	I1014 13:56:49.027677   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:56:49.041850   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubectl': No such file or directory
	I1014 13:56:49.041880   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubectl --> /var/lib/minikube/binaries/v1.31.1/kubectl (56381592 bytes)
	I1014 13:56:49.059453   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubeadm': No such file or directory
	I1014 13:56:49.059469   25306 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet -> /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.059486   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubeadm --> /var/lib/minikube/binaries/v1.31.1/kubeadm (58290328 bytes)
	I1014 13:56:49.059574   25306 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet
	I1014 13:56:49.108836   25306 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.1/kubelet': No such file or directory
	I1014 13:56:49.108879   25306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1/kubelet --> /var/lib/minikube/binaries/v1.31.1/kubelet (76869944 bytes)
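The kubeadm, kubectl and kubelet transfers above all follow the same pattern: stat the target path on the node and only copy the cached binary when it is missing. A sketch of that check-then-copy step against a local filesystem (the cache and target paths come from the log; the plain file copy stands in for minikube's scp over SSH):

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// ensureBinary copies src to dst only when dst does not exist yet,
// mirroring the "stat, then scp if missing" sequence in the log.
func ensureBinary(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	cache := "/home/jenkins/minikube-integration/19790-7836/.minikube/cache/linux/amd64/v1.31.1"
	for _, name := range []string{"kubeadm", "kubectl", "kubelet"} {
		err := ensureBinary(filepath.Join(cache, name), "/var/lib/minikube/binaries/v1.31.1/"+name)
		fmt.Println(name, err)
	}
}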
	I1014 13:56:49.922146   25306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1014 13:56:49.934057   25306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1014 13:56:49.951495   25306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:56:49.969831   25306 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 13:56:49.987375   25306 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 13:56:49.991392   25306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
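The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current VIP mapping. The same idempotent update expressed directly in Go, assuming it runs as root on the node (hostname and address are the values from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops existing lines ending in "\thost" and appends
// "ip\thost", matching the grep -v / echo pipeline in the log.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	err := upsertHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal")
	fmt.Println(err)
}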
	I1014 13:56:50.004437   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:56:50.138457   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:56:50.156141   25306 host.go:66] Checking if "ha-450021" exists ...
	I1014 13:56:50.156664   25306 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:56:50.156719   25306 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:56:50.172505   25306 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34963
	I1014 13:56:50.172984   25306 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:56:50.173395   25306 main.go:141] libmachine: Using API Version  1
	I1014 13:56:50.173421   25306 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:56:50.173801   25306 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:56:50.173992   25306 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 13:56:50.174119   25306 start.go:317] joinCluster: &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:56:50.174253   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1014 13:56:50.174270   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 13:56:50.177090   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177620   25306 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 13:56:50.177652   25306 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 13:56:50.177788   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 13:56:50.177965   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 13:56:50.178111   25306 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 13:56:50.178264   25306 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 13:56:50.344835   25306 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:56:50.344884   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443"
	I1014 13:57:13.924825   25306 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zud3yn.6rxrec6p5rmcwb5b --discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-450021-m03 --control-plane --apiserver-advertise-address=192.168.39.55 --apiserver-bind-port=8443": (23.579918283s)
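The control-plane join above completed in roughly 23.6s. For reference, a sketch of how the argument list for such a kubeadm join could be assembled; the token, CA hash and addresses are copied from the log, while the helper itself is illustrative rather than minikube's code:

package main

import (
	"fmt"
	"strings"
)

// joinArgs builds a "kubeadm join" invocation for an additional control-plane
// node, mirroring the flags visible in the log above.
func joinArgs(endpoint, token, caHash, nodeName, advertiseIP string, port int) []string {
	return []string{
		"join", endpoint,
		"--token", token,
		"--discovery-token-ca-cert-hash", caHash,
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/crio/crio.sock",
		"--node-name=" + nodeName,
		"--control-plane",
		"--apiserver-advertise-address=" + advertiseIP,
		fmt.Sprintf("--apiserver-bind-port=%d", port),
	}
}

func main() {
	args := joinArgs("control-plane.minikube.internal:8443",
		"zud3yn.6rxrec6p5rmcwb5b",
		"sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194",
		"ha-450021-m03", "192.168.39.55", 8443)
	fmt.Println("kubeadm " + strings.Join(args, " "))
}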
	I1014 13:57:13.924874   25306 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1014 13:57:14.548857   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-450021-m03 minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=ha-450021 minikube.k8s.io/primary=false
	I1014 13:57:14.695478   25306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-450021-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1014 13:57:14.877781   25306 start.go:319] duration metric: took 24.703657095s to joinCluster
	I1014 13:57:14.877880   25306 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 13:57:14.878165   25306 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:57:14.879747   25306 out.go:177] * Verifying Kubernetes components...
	I1014 13:57:14.881030   25306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:57:15.185770   25306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:57:15.218461   25306 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:57:15.218911   25306 kapi.go:59] client config for ha-450021: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1014 13:57:15.218986   25306 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.176:8443
	I1014 13:57:15.219237   25306 node_ready.go:35] waiting up to 6m0s for node "ha-450021-m03" to be "Ready" ...
	I1014 13:57:15.219350   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.219360   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.219373   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.219378   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.231145   25306 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1014 13:57:15.719481   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:15.719504   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:15.719515   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:15.719523   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:15.723133   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.219449   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.219474   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.219486   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.219493   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.222753   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:16.719775   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:16.719794   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:16.719801   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:16.719805   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:16.723148   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.220337   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.220382   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.223796   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:17.224523   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
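From here the log polls GET /api/v1/nodes/ha-450021-m03 roughly every half second until the Ready condition turns True, within the 6m0s budget noted earlier. A sketch of such a wait loop against the API, assuming an already-authenticated *http.Client (building one from the client certificate shown in the kapi config above is out of scope here):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// nodeStatus is the minimal slice of the Node object needed to read the
// Ready condition from GET /api/v1/nodes/<name>.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// waitForNodeReady polls the node until Ready=True or the timeout expires.
func waitForNodeReady(c *http.Client, url string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := c.Get(url)
		if err == nil {
			var n nodeStatus
			if json.NewDecoder(resp.Body).Decode(&n) == nil {
				for _, cond := range n.Status.Conditions {
					if cond.Type == "Ready" && cond.Status == "True" {
						resp.Body.Close()
						return nil
					}
				}
			}
			resp.Body.Close()
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("node not Ready within %v", timeout)
}

func main() {
	// The real client authenticates with the client cert/key from the log.
	err := waitForNodeReady(http.DefaultClient,
		"https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03",
		6*time.Minute, 500*time.Millisecond)
	fmt.Println(err)
}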
	I1014 13:57:17.719785   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:17.719812   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:17.719823   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:17.719828   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:17.724599   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:18.219479   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.219497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.219505   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.219510   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.222903   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:18.719939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:18.719958   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:18.719964   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:18.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:18.722786   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:19.220210   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.220235   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.220246   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.220251   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.223890   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:19.719936   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:19.719957   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:19.719965   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:19.719968   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:19.725873   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:19.726613   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:20.219399   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.219418   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.219426   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.219429   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.222447   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:20.720283   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:20.720304   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:20.720311   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:20.720316   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:20.723293   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:21.219622   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.219643   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.219651   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.219655   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.223137   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:21.719413   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:21.719434   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:21.719441   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:21.719445   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:21.727130   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:21.728875   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:22.219563   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.219584   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.219593   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.219597   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.222980   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:22.719873   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:22.719897   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:22.719906   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:22.719910   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:22.723538   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.219424   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.219447   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.219456   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.219459   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.223288   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:23.719840   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:23.719863   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:23.719870   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:23.719874   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:23.725306   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:24.220401   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.220427   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.220439   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.220448   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.224025   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:24.224423   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:24.720285   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:24.720311   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:24.720323   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:24.720331   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:24.724123   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.219820   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.219841   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.219849   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.219852   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.223237   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:25.720061   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:25.720081   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:25.720090   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:25.720095   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:25.727909   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:26.220029   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.220052   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.220060   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.220065   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.223671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:26.719549   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:26.719569   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:26.719577   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:26.719581   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:26.724063   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:26.724628   25306 node_ready.go:53] node "ha-450021-m03" has status "Ready":"False"
	I1014 13:57:27.220196   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.220218   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.220230   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.227906   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:27.719535   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:27.719576   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:27.719587   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:27.719592   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:27.727292   25306 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1014 13:57:28.219952   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.219973   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.219983   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.219988   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.223688   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:28.719432   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:28.719455   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:28.719463   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:28.719468   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:28.722896   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.219877   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.219901   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.219911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.219915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.223129   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.223965   25306 node_ready.go:49] node "ha-450021-m03" has status "Ready":"True"
	I1014 13:57:29.223987   25306 node_ready.go:38] duration metric: took 14.004731761s for node "ha-450021-m03" to be "Ready" ...
	I1014 13:57:29.223998   25306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:29.224060   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:29.224068   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.224075   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.224081   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.230054   25306 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1014 13:57:29.238333   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.238422   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-btfml
	I1014 13:57:29.238435   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.238446   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.238455   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.242284   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.243174   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.243194   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.243204   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.243210   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.245933   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.246411   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.246431   25306 pod_ready.go:82] duration metric: took 8.073653ms for pod "coredns-7c65d6cfc9-btfml" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246440   25306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.246494   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-h5s6h
	I1014 13:57:29.246505   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.246515   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.246521   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.248883   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.249550   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.249563   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.249569   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.249573   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.251738   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.252240   25306 pod_ready.go:93] pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.252260   25306 pod_ready.go:82] duration metric: took 5.813932ms for pod "coredns-7c65d6cfc9-h5s6h" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252268   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.252312   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021
	I1014 13:57:29.252319   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.252326   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.252330   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.254629   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.255222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:29.255236   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.255243   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.255248   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.257432   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.257842   25306 pod_ready.go:93] pod "etcd-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.257858   25306 pod_ready.go:82] duration metric: took 5.5841ms for pod "etcd-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257865   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.257906   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m02
	I1014 13:57:29.257913   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.257920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.257926   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.260016   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.260730   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:29.260748   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.260759   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.260766   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.262822   25306 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1014 13:57:29.263416   25306 pod_ready.go:93] pod "etcd-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.263434   25306 pod_ready.go:82] duration metric: took 5.562613ms for pod "etcd-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.263445   25306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.420814   25306 request.go:632] Waited for 157.302029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420888   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/etcd-ha-450021-m03
	I1014 13:57:29.420896   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.420904   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.420911   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.423933   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.620244   25306 request.go:632] Waited for 195.721406ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620303   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:29.620309   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.620331   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.620359   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.623721   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:29.624232   25306 pod_ready.go:93] pod "etcd-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:29.624248   25306 pod_ready.go:82] duration metric: took 360.793531ms for pod "etcd-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.624265   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:29.820803   25306 request.go:632] Waited for 196.4673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820871   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021
	I1014 13:57:29.820878   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:29.820888   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:29.820899   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:29.825055   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:30.020658   25306 request.go:632] Waited for 194.868544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020728   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:30.020733   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.020740   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.020744   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.024136   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.024766   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.024782   25306 pod_ready.go:82] duration metric: took 400.510119ms for pod "kube-apiserver-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.024791   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.220429   25306 request.go:632] Waited for 195.542568ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220491   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m02
	I1014 13:57:30.220497   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.220508   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.220517   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.224059   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.420172   25306 request.go:632] Waited for 195.340177ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420225   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:30.420231   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.420238   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.420243   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.423967   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.424613   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.424631   25306 pod_ready.go:82] duration metric: took 399.833776ms for pod "kube-apiserver-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.424640   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.620846   25306 request.go:632] Waited for 196.141352ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620922   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-450021-m03
	I1014 13:57:30.620928   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.620935   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.620942   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.624496   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.820849   25306 request.go:632] Waited for 195.396807ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820939   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:30.820975   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:30.820988   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:30.820995   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:30.824502   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:30.825021   25306 pod_ready.go:93] pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:30.825046   25306 pod_ready.go:82] duration metric: took 400.398723ms for pod "kube-apiserver-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:30.825059   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.020285   25306 request.go:632] Waited for 195.157008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020365   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021
	I1014 13:57:31.020370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.020385   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.020393   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.024268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.220585   25306 request.go:632] Waited for 195.341359ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220643   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:31.220650   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.220659   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.220664   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.224268   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.224942   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.224972   25306 pod_ready.go:82] duration metric: took 399.90441ms for pod "kube-controller-manager-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.224991   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.419861   25306 request.go:632] Waited for 194.791136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419920   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m02
	I1014 13:57:31.419926   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.419934   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.419939   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.423671   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.620170   25306 request.go:632] Waited for 195.363598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620257   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:31.620267   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.620279   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.620289   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.623838   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:31.624806   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:31.624830   25306 pod_ready.go:82] duration metric: took 399.825307ms for pod "kube-controller-manager-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.624845   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:31.819925   25306 request.go:632] Waited for 194.986166ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819986   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-450021-m03
	I1014 13:57:31.819995   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:31.820007   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:31.820020   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:31.823660   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.020870   25306 request.go:632] Waited for 196.217554ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020953   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.020964   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.020976   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.020984   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.024484   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.025120   25306 pod_ready.go:93] pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.025154   25306 pod_ready.go:82] duration metric: took 400.297134ms for pod "kube-controller-manager-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.025174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.220154   25306 request.go:632] Waited for 194.89867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220222   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tbfp
	I1014 13:57:32.220229   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.220239   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.220246   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.223571   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.420701   25306 request.go:632] Waited for 196.352524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420758   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:32.420763   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.420770   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.420774   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.424213   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.424900   25306 pod_ready.go:93] pod "kube-proxy-9tbfp" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.424923   25306 pod_ready.go:82] duration metric: took 399.74019ms for pod "kube-proxy-9tbfp" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.424936   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.619849   25306 request.go:632] Waited for 194.848954ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619902   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dmbpv
	I1014 13:57:32.619908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.619915   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.619918   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.623593   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.820780   25306 request.go:632] Waited for 196.366155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820849   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:32.820854   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:32.820863   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:32.820870   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:32.824510   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:32.825180   25306 pod_ready.go:93] pod "kube-proxy-dmbpv" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:32.825196   25306 pod_ready.go:82] duration metric: took 400.2529ms for pod "kube-proxy-dmbpv" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:32.825205   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.020309   25306 request.go:632] Waited for 195.030338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020398   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v24tf
	I1014 13:57:33.020409   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.020421   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.020429   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.023944   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.220873   25306 request.go:632] Waited for 196.168894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220972   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:33.220984   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.221002   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.221010   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.224398   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.225139   25306 pod_ready.go:93] pod "kube-proxy-v24tf" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.225161   25306 pod_ready.go:82] duration metric: took 399.9482ms for pod "kube-proxy-v24tf" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.225174   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.420278   25306 request.go:632] Waited for 195.028059ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420352   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021
	I1014 13:57:33.420358   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.420365   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.420370   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.423970   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.619940   25306 request.go:632] Waited for 195.292135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620017   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021
	I1014 13:57:33.620024   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.620031   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.620038   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.623628   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:33.624429   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:33.624446   25306 pod_ready.go:82] duration metric: took 399.265054ms for pod "kube-scheduler-ha-450021" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.624456   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:33.820766   25306 request.go:632] Waited for 196.250065ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820834   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m02
	I1014 13:57:33.820840   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:33.820847   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:33.820861   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:33.824813   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.020844   25306 request.go:632] Waited for 195.391993ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020901   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m02
	I1014 13:57:34.020908   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.020915   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.020920   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.025139   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.026105   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.026127   25306 pod_ready.go:82] duration metric: took 401.663759ms for pod "kube-scheduler-ha-450021-m02" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.026140   25306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.220315   25306 request.go:632] Waited for 194.095801ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220368   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-450021-m03
	I1014 13:57:34.220374   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.220381   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.220385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.224012   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.420204   25306 request.go:632] Waited for 195.373756ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420275   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes/ha-450021-m03
	I1014 13:57:34.420280   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.420288   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.420292   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.424022   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:34.424779   25306 pod_ready.go:93] pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace has status "Ready":"True"
	I1014 13:57:34.424801   25306 pod_ready.go:82] duration metric: took 398.654013ms for pod "kube-scheduler-ha-450021-m03" in "kube-system" namespace to be "Ready" ...
	I1014 13:57:34.424816   25306 pod_ready.go:39] duration metric: took 5.200801864s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:57:34.424833   25306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:57:34.424888   25306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:57:34.443450   25306 api_server.go:72] duration metric: took 19.56551851s to wait for apiserver process to appear ...
	I1014 13:57:34.443480   25306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:57:34.443507   25306 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8443/healthz ...
	I1014 13:57:34.447984   25306 api_server.go:279] https://192.168.39.176:8443/healthz returned 200:
	ok
	I1014 13:57:34.448076   25306 round_trippers.go:463] GET https://192.168.39.176:8443/version
	I1014 13:57:34.448089   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.448100   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.448108   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.449007   25306 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1014 13:57:34.449084   25306 api_server.go:141] control plane version: v1.31.1
	I1014 13:57:34.449104   25306 api_server.go:131] duration metric: took 5.616812ms to wait for apiserver health ...
	I1014 13:57:34.449115   25306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:57:34.620303   25306 request.go:632] Waited for 171.103547ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620363   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:34.620370   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.620380   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.620385   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.626531   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:34.632849   25306 system_pods.go:59] 24 kube-system pods found
	I1014 13:57:34.632878   25306 system_pods.go:61] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:34.632883   25306 system_pods.go:61] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:34.632887   25306 system_pods.go:61] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:34.632891   25306 system_pods.go:61] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:34.632894   25306 system_pods.go:61] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:34.632897   25306 system_pods.go:61] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:34.632900   25306 system_pods.go:61] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:34.632903   25306 system_pods.go:61] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:34.632906   25306 system_pods.go:61] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:34.632909   25306 system_pods.go:61] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:34.632911   25306 system_pods.go:61] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:34.632915   25306 system_pods.go:61] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:34.632917   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:34.632920   25306 system_pods.go:61] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:34.632923   25306 system_pods.go:61] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:34.632926   25306 system_pods.go:61] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:34.632929   25306 system_pods.go:61] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:34.632931   25306 system_pods.go:61] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:34.632934   25306 system_pods.go:61] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:34.632937   25306 system_pods.go:61] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:34.632940   25306 system_pods.go:61] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:34.632942   25306 system_pods.go:61] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:34.632946   25306 system_pods.go:61] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:34.632948   25306 system_pods.go:61] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:34.632953   25306 system_pods.go:74] duration metric: took 183.830824ms to wait for pod list to return data ...
	I1014 13:57:34.632963   25306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:57:34.820472   25306 request.go:632] Waited for 187.441614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820540   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/default/serviceaccounts
	I1014 13:57:34.820546   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:34.820553   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:34.820563   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:34.824880   25306 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1014 13:57:34.824982   25306 default_sa.go:45] found service account: "default"
	I1014 13:57:34.824994   25306 default_sa.go:55] duration metric: took 192.026288ms for default service account to be created ...
	I1014 13:57:34.825002   25306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:57:35.020105   25306 request.go:632] Waited for 195.031126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020178   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/namespaces/kube-system/pods
	I1014 13:57:35.020187   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.020199   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.020209   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.026365   25306 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1014 13:57:35.032685   25306 system_pods.go:86] 24 kube-system pods found
	I1014 13:57:35.032713   25306 system_pods.go:89] "coredns-7c65d6cfc9-btfml" [292e08ef-5eec-4ebb-acf5-5b4b03e47724] Running
	I1014 13:57:35.032719   25306 system_pods.go:89] "coredns-7c65d6cfc9-h5s6h" [bf78614c-8f22-48f9-8a16-cfcffecadfcc] Running
	I1014 13:57:35.032722   25306 system_pods.go:89] "etcd-ha-450021" [d3e4a252-6d4a-4617-99f8-416ddaa8e695] Running
	I1014 13:57:35.032727   25306 system_pods.go:89] "etcd-ha-450021-m02" [d890c5b4-c756-42a4-a549-59b46d9fa0f6] Running
	I1014 13:57:35.032731   25306 system_pods.go:89] "etcd-ha-450021-m03" [ceded083-0662-41fd-9317-3f7debf0252b] Running
	I1014 13:57:35.032736   25306 system_pods.go:89] "kindnet-2ghzc" [f725a811-6a0e-433c-913d-079b7bc4742f] Running
	I1014 13:57:35.032739   25306 system_pods.go:89] "kindnet-7jwgx" [c4607bd9-32b8-401b-a74e-b20d6f63ce03] Running
	I1014 13:57:35.032743   25306 system_pods.go:89] "kindnet-c2xkn" [0f821123-80f9-4fe5-b64c-fb641ec185ea] Running
	I1014 13:57:35.032747   25306 system_pods.go:89] "kube-apiserver-ha-450021" [3c355a29-9ac5-466a-974f-22fc58429b98] Running
	I1014 13:57:35.032751   25306 system_pods.go:89] "kube-apiserver-ha-450021-m02" [5e9f016e-2b42-4301-964f-8e2af49d0d08] Running
	I1014 13:57:35.032754   25306 system_pods.go:89] "kube-apiserver-ha-450021-m03" [3521d4f5-b657-4f3c-a36e-a855d81590e9] Running
	I1014 13:57:35.032758   25306 system_pods.go:89] "kube-controller-manager-ha-450021" [b002ddcb-0bb2-44f5-a779-20df99f3cab5] Running
	I1014 13:57:35.032763   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m02" [f7be35b1-380c-4f40-a1d6-5522b961917c] Running
	I1014 13:57:35.032770   25306 system_pods.go:89] "kube-controller-manager-ha-450021-m03" [56960cdf-61e7-4251-8fa5-7034b7aeffcd] Running
	I1014 13:57:35.032774   25306 system_pods.go:89] "kube-proxy-9tbfp" [fc30758d-16af-4818-9414-e78ee865fb7d] Running
	I1014 13:57:35.032780   25306 system_pods.go:89] "kube-proxy-dmbpv" [e09737a1-c663-4951-b6cb-c0690fbd8153] Running
	I1014 13:57:35.032783   25306 system_pods.go:89] "kube-proxy-v24tf" [49b626fc-4017-45f7-a44f-43f3b311d0e0] Running
	I1014 13:57:35.032789   25306 system_pods.go:89] "kube-scheduler-ha-450021" [2f216272-b604-4f1c-ad4b-fdb874a78cf4] Running
	I1014 13:57:35.032793   25306 system_pods.go:89] "kube-scheduler-ha-450021-m02" [cfa4bb4e-6a32-4b4b-85df-2c7b1a356a4a] Running
	I1014 13:57:35.032799   25306 system_pods.go:89] "kube-scheduler-ha-450021-m03" [11cfe784-95d9-48fb-ab0c-334d4136c207] Running
	I1014 13:57:35.032803   25306 system_pods.go:89] "kube-vip-ha-450021" [e5340482-7ea5-4299-8096-a2f292c4bfdd] Running
	I1014 13:57:35.032808   25306 system_pods.go:89] "kube-vip-ha-450021-m02" [6a409d8d-9566-4caa-af5a-0dbe7b9f6cec] Running
	I1014 13:57:35.032811   25306 system_pods.go:89] "kube-vip-ha-450021-m03" [de6e64e3-5d83-4ca7-8618-279cca6bf0c1] Running
	I1014 13:57:35.032816   25306 system_pods.go:89] "storage-provisioner" [1377adb3-3faf-4dee-a86e-9c4544a02d51] Running
	I1014 13:57:35.032822   25306 system_pods.go:126] duration metric: took 207.815391ms to wait for k8s-apps to be running ...
	I1014 13:57:35.032831   25306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:57:35.032872   25306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:57:35.048661   25306 system_svc.go:56] duration metric: took 15.819815ms WaitForService to wait for kubelet
	I1014 13:57:35.048694   25306 kubeadm.go:582] duration metric: took 20.170783435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:57:35.048713   25306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:57:35.220270   25306 request.go:632] Waited for 171.481631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220338   25306 round_trippers.go:463] GET https://192.168.39.176:8443/api/v1/nodes
	I1014 13:57:35.220343   25306 round_trippers.go:469] Request Headers:
	I1014 13:57:35.220351   25306 round_trippers.go:473]     Accept: application/json, */*
	I1014 13:57:35.220356   25306 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1014 13:57:35.224271   25306 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1014 13:57:35.225220   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225243   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225255   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225258   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225264   25306 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 13:57:35.225268   25306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:57:35.225272   25306 node_conditions.go:105] duration metric: took 176.55497ms to run NodePressure ...
	I1014 13:57:35.225286   25306 start.go:241] waiting for startup goroutines ...
	I1014 13:57:35.225306   25306 start.go:255] writing updated cluster config ...
	I1014 13:57:35.225629   25306 ssh_runner.go:195] Run: rm -f paused
	I1014 13:57:35.278941   25306 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:57:35.281235   25306 out.go:177] * Done! kubectl is now configured to use "ha-450021" cluster and "default" namespace by default
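	For reference, the node_ready/pod_ready polling visible in the log above (repeated GETs of /api/v1/nodes/ha-450021-m03 until "Ready" is "True") follows the standard client-go pattern. The sketch below is an illustrative assumption, not minikube's own code: the function name waitForNodeReady, the 500ms interval, and loading the kubeconfig from the default home path are all chosen here to mirror the cadence seen in the log.

	// Minimal sketch, assuming client-go and a reachable kubeconfig.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the API server until the named node reports the
	// Ready condition as True, or the context expires.
	func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		ticker := time.NewTicker(500 * time.Millisecond) // ~500ms cadence, as in the log above
		defer ticker.Stop()
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-ticker.C:
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForNodeReady(ctx, cs, "ha-450021-m03"); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}

	The same loop shape covers the subsequent pod_ready waits in the log, substituting Pods().Get and the PodReady condition; the client-side throttling messages ("Waited for ... due to client-side throttling") come from client-go's default rate limiter, not from the poll interval itself.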
	
	
	==> CRI-O <==
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.252213954Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4f54d74c-8a2b-4b82-b304-f1c015bf0b3a name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.253400673Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74d2fef8-8c40-4e2a-b98e-944a6819e48c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.253903959Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914492253880828,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74d2fef8-8c40-4e2a-b98e-944a6819e48c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.254668150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=afaf2159-eae0-4fa9-a4ce-d8bd89be10a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.254725645Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=afaf2159-eae0-4fa9-a4ce-d8bd89be10a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.254960701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=afaf2159-eae0-4fa9-a4ce-d8bd89be10a8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.282656988Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c747f7df-2a20-4fad-ab41-d19a4f830771 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.282978093Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&PodSandboxMetadata{Name:busybox-7dff88458-fkz82,Uid:07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914256575693813,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,pod-template-hash: 7dff88458,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T13:57:36.259893995Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:1377adb3-3faf-4dee-a86e-9c4544a02d51,Namespace:kube-system,Attempt:0,},State:SANDBO
X_READY,CreatedAt:1728914119351726434,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"ty
pe\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-14T13:55:19.021361497Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-h5s6h,Uid:bf78614c-8f22-48f9-8a16-cfcffecadfcc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914119347363603,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f22-48f9-8a16-cfcffecadfcc,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T13:55:19.020127157Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-btfml,Uid:292e08ef-5eec-4ebb-acf5-5b4b03e47724,Namespace:kube-system,Atte
mpt:0,},State:SANDBOX_READY,CreatedAt:1728914119317724514,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T13:55:19.012079483Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&PodSandboxMetadata{Name:kindnet-c2xkn,Uid:0f821123-80f9-4fe5-b64c-fb641ec185ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914104500432788,Labels:map[string]string{app: kindnet,controller-revision-hash: 6f5b6b96c8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotati
ons:map[string]string{kubernetes.io/config.seen: 2024-10-14T13:55:04.175484891Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&PodSandboxMetadata{Name:kube-proxy-dmbpv,Uid:e09737a1-c663-4951-b6cb-c0690fbd8153,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914104494705515,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T13:55:04.175363632Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-450021,Uid:f67c899a1266c35ae5a8a71fac8e2760,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1728914093018484197,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{kubernetes.io/config.hash: f67c899a1266c35ae5a8a71fac8e2760,kubernetes.io/config.seen: 2024-10-14T13:54:52.517031509Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-450021,Uid:4d575d608bbdadce4a654f35576809ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914093012806933,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4d57
5d608bbdadce4a654f35576809ec,kubernetes.io/config.seen: 2024-10-14T13:54:52.517030840Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&PodSandboxMetadata{Name:etcd-ha-450021,Uid:83d8c37c1aa9e38ec5865c9c3159f1b5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914092991872661,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.176:2379,kubernetes.io/config.hash: 83d8c37c1aa9e38ec5865c9c3159f1b5,kubernetes.io/config.seen: 2024-10-14T13:54:52.517024718Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&PodSandboxMetadata{Name:kube-c
ontroller-manager-ha-450021,Uid:0ca49fb553a9c26ea8ae634afb933e7b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914092982064459,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 0ca49fb553a9c26ea8ae634afb933e7b,kubernetes.io/config.seen: 2024-10-14T13:54:52.517029891Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-450021,Uid:3c293b9606d38e94bf353b2714c2a069,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728914092978936100,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-450021,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.176:8443,kubernetes.io/config.hash: 3c293b9606d38e94bf353b2714c2a069,kubernetes.io/config.seen: 2024-10-14T13:54:52.517028622Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c747f7df-2a20-4fad-ab41-d19a4f830771 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.284137086Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aa0a09d2-0e9e-44e0-b87d-f32981607c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.284195404Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aa0a09d2-0e9e-44e0-b87d-f32981607c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.284420886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=aa0a09d2-0e9e-44e0-b87d-f32981607c06 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.296987269Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=037a5a22-2913-4ce7-8585-ca0200905d83 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.297074358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=037a5a22-2913-4ce7-8585-ca0200905d83 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.298989089Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0808cf5c-2ce1-422b-ac21-4da57b772d00 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.299426822Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914492299406237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0808cf5c-2ce1-422b-ac21-4da57b772d00 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.300064590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=852a868c-2077-4afd-a9c9-818e65ffed6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.300122126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=852a868c-2077-4afd-a9c9-818e65ffed6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.300348473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=852a868c-2077-4afd-a9c9-818e65ffed6f name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.341868047Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6fc61b68-edb6-4793-b9a8-a6b44cdb1d8a name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.341938393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6fc61b68-edb6-4793-b9a8-a6b44cdb1d8a name=/runtime.v1.RuntimeService/Version
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.343107422Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af4b6f29-5446-4e1a-a7d6-09471e3b4a60 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.343535248Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914492343513644,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af4b6f29-5446-4e1a-a7d6-09471e3b4a60 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.344051212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dd3bd9d-6136-4d94-a7ba-51f9e4292024 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.344124305Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dd3bd9d-6136-4d94-a7ba-51f9e4292024 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:01:32 ha-450021 crio[655]: time="2024-10-14 14:01:32.344376854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a41053c31fcb74ad24a4417c885436510a42c2e477d721651ae65459748bfd17,PodSandboxId:c3201918bd10d1535ddb2ebef0aa3b55e3e997e18a90de29ee09c2a7cb289b47,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1728914259057513833,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-fkz82,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1051cfacf1c9fba1500a3437ece4de024c0fac626340151d2e28cbc18dc67a85,PodSandboxId:49d4b2387dd65dbd67bcdc3c377ba15e05400c782a4e2980358881a9c87ca5f3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728914119581188349,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1377adb3-3faf-4dee-a86e-9c4544a02d51,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.ter
minationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927,PodSandboxId:b83407d74496b7f16cdeead48267cc803ffacd743feae034b1233a8c93800582,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119554752984,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-btfml,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 292e08ef-5eec-4ebb-acf5-5b4b03e47724,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"na
me\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe,PodSandboxId:e862ae5ec13c39ac9605ac5725a1018466957149e1a69b2e013f7a87d5095bee,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728914119562072468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-h5s6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf78614c-8f
22-48f9-8a16-cfcffecadfcc,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996,PodSandboxId:10ad22ab64de39acac4028e06deccb0ee0084112ba58c2349599913bf0d931d6,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52,State:CO
NTAINER_RUNNING,CreatedAt:1728914107455260455,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-c2xkn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0f821123-80f9-4fe5-b64c-fb641ec185ea,},Annotations:map[string]string{io.kubernetes.container.hash: 1e4dce18,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124,PodSandboxId:40a3318e89ae5bc2fe2d145b32f19e419934ba96586add9c17a653799fad9d26,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:172891410
4698984942,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dmbpv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e09737a1-c663-4951-b6cb-c0690fbd8153,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899,PodSandboxId:dcc284c053db656af8f5da1c1a80672bfee0353e44ea6e4a01814f37351dad87,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:18b729c2288dc4ef333dace4cac0c9a3afd63bd7d3c25bc857d39c79eea48460,State:CONTAINER_RUNNING,CreatedAt:17289140950
79963768,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f67c899a1266c35ae5a8a71fac8e2760,},Annotations:map[string]string{io.kubernetes.container.hash: 212a3ca5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221,PodSandboxId:ce558cb07ca8f68689235cad5912b7da5a8f1c75775d2e5f2e7e823fe5127da9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728914093274186361,Labels:map[string]string
{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d575d608bbdadce4a654f35576809ec,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a,PodSandboxId:ee3335073bb66b262b3eabf6a735be75c2ddcef2fa54aff9245585e26dd713f7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728914093280862312,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ca49fb553a9c26ea8ae634afb933e7b,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1,PodSandboxId:bc7fe679de4dc3fdff7f7e05bcd59ce354148a5c261197612bf284921530e902,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728914093233135044,Labels:map[string]string{io.kubernetes.container.
name: etcd,io.kubernetes.pod.name: etcd-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 83d8c37c1aa9e38ec5865c9c3159f1b5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e,PodSandboxId:efaae5865d8afa77d2901173ba9c38ea901ca40f040d82cc15e889b37ff5a83c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728914093143514748,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.na
me: kube-apiserver-ha-450021,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c293b9606d38e94bf353b2714c2a069,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9dd3bd9d-6136-4d94-a7ba-51f9e4292024 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a41053c31fcb7       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c3201918bd10d       busybox-7dff88458-fkz82
	1051cfacf1c9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   49d4b2387dd65       storage-provisioner
	138a0b23a0907       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   e862ae5ec13c3       coredns-7c65d6cfc9-h5s6h
	b17b6d38f9359       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   b83407d74496b       coredns-7c65d6cfc9-btfml
	b15af89d835ee       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387    6 minutes ago       Running             kindnet-cni               0                   10ad22ab64de3       kindnet-c2xkn
	5eec863af38c1       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      6 minutes ago       Running             kube-proxy                0                   40a3318e89ae5       kube-proxy-dmbpv
	69f6cdf690df6       ghcr.io/kube-vip/kube-vip@sha256:805addee63aa68946df6a5b2dd410c9e658b7f69ddbfc8c0ea8a1486662d6413     6 minutes ago       Running             kube-vip                  0                   dcc284c053db6       kube-vip-ha-450021
	09fbfff3b334b       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      6 minutes ago       Running             kube-controller-manager   0                   ee3335073bb66       kube-controller-manager-ha-450021
	4efae268f9ec3       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      6 minutes ago       Running             kube-scheduler            0                   ce558cb07ca8f       kube-scheduler-ha-450021
	6ebec97dfd405       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   bc7fe679de4dc       etcd-ha-450021
	942c179e591a9       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      6 minutes ago       Running             kube-apiserver            0                   efaae5865d8af       kube-apiserver-ha-450021
	
	
	==> coredns [138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe] <==
	[INFO] 10.244.1.2:43382 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000121511s
	[INFO] 10.244.1.2:47675 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001762532s
	[INFO] 10.244.0.4:45515 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000083904s
	[INFO] 10.244.0.4:48451 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149827s
	[INFO] 10.244.0.4:36014 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00015272s
	[INFO] 10.244.2.2:40959 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000194596s
	[INFO] 10.244.2.2:44151 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000212714s
	[INFO] 10.244.2.2:55911 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089682s
	[INFO] 10.244.1.2:47272 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299918s
	[INFO] 10.244.1.2:44591 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000078031s
	[INFO] 10.244.1.2:37471 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000072637s
	[INFO] 10.244.0.4:52930 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152779s
	[INFO] 10.244.0.4:33266 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00005592s
	[INFO] 10.244.2.2:36389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000275257s
	[INFO] 10.244.2.2:43232 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010928s
	[INFO] 10.244.2.2:38102 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092762s
	[INFO] 10.244.1.2:55403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000222145s
	[INFO] 10.244.1.2:52540 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000102916s
	[INFO] 10.244.0.4:54154 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135993s
	[INFO] 10.244.0.4:36974 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000196993s
	[INFO] 10.244.0.4:54725 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000084888s
	[INFO] 10.244.2.2:57068 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000174437s
	[INFO] 10.244.1.2:46234 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000191287s
	[INFO] 10.244.1.2:39695 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000080939s
	[INFO] 10.244.1.2:36634 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000064427s
	
	
	==> coredns [b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927] <==
	[INFO] 10.244.0.4:50854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009051191s
	[INFO] 10.244.0.4:34637 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000156712s
	[INFO] 10.244.0.4:33648 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000081153s
	[INFO] 10.244.0.4:57465 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003251096s
	[INFO] 10.244.0.4:51433 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000118067s
	[INFO] 10.244.2.2:37621 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200056s
	[INFO] 10.244.2.2:41751 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001978554s
	[INFO] 10.244.2.2:33044 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001486731s
	[INFO] 10.244.2.2:43102 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00010457s
	[INFO] 10.244.2.2:36141 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000183057s
	[INFO] 10.244.1.2:35260 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014156s
	[INFO] 10.244.1.2:40737 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00207375s
	[INFO] 10.244.1.2:34377 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109225s
	[INFO] 10.244.1.2:48194 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000096468s
	[INFO] 10.244.1.2:53649 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092891s
	[INFO] 10.244.0.4:39691 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000126403s
	[INFO] 10.244.0.4:59011 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094158s
	[INFO] 10.244.2.2:46754 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133215s
	[INFO] 10.244.1.2:44424 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161779s
	[INFO] 10.244.1.2:36322 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010124s
	[INFO] 10.244.0.4:56787 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000305054s
	[INFO] 10.244.2.2:56511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168323s
	[INFO] 10.244.2.2:35510 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000291052s
	[INFO] 10.244.2.2:56208 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000174753s
	[INFO] 10.244.1.2:41964 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000119677s
	
	
	==> describe nodes <==
	Name:               ha-450021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_00_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:54:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:54:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:03 +0000   Mon, 14 Oct 2024 13:55:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    ha-450021
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 0546a3427732401daacd4235ad46d465
	  System UUID:                0546a342-7732-401d-aacd-4235ad46d465
	  Boot ID:                    19dd080e-b9f2-467d-b5f2-41dbb07e1880
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-fkz82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 coredns-7c65d6cfc9-btfml             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 coredns-7c65d6cfc9-h5s6h             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m28s
	  kube-system                 etcd-ha-450021                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m33s
	  kube-system                 kindnet-c2xkn                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m28s
	  kube-system                 kube-apiserver-ha-450021             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-controller-manager-ha-450021    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-proxy-dmbpv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-ha-450021             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 kube-vip-ha-450021                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m27s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m40s (x7 over 6m40s)  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m40s (x8 over 6m40s)  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m40s (x8 over 6m40s)  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m33s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m33s                  kubelet          Node ha-450021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m33s                  kubelet          Node ha-450021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m33s                  kubelet          Node ha-450021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m29s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  NodeReady                6m14s                  kubelet          Node ha-450021 status is now: NodeReady
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-450021 event: Registered Node ha-450021 in Controller
	
	
	Name:               ha-450021-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_55_59_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:55:56 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:58:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 14 Oct 2024 13:57:58 +0000   Mon, 14 Oct 2024 13:59:30 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    ha-450021-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 a42e43dc14cb4b949c605bff9ac6e0d6
	  System UUID:                a42e43dc-14cb-4b94-9c60-5bff9ac6e0d6
	  Boot ID:                    479e9a18-0fa8-4366-8acf-af40a06156d6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-nt6q5                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-450021-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m35s
	  kube-system                 kindnet-2ghzc                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m36s
	  kube-system                 kube-apiserver-ha-450021-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-ha-450021-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-v24tf                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kube-system                 kube-scheduler-ha-450021-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-vip-ha-450021-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m32s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m36s (x8 over 5m37s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m36s (x8 over 5m37s)  kubelet          Node ha-450021-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m36s (x7 over 5m37s)  kubelet          Node ha-450021-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m34s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           5m27s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-450021-m02 event: Registered Node ha-450021-m02 in Controller
	  Normal  NodeNotReady             2m2s                   node-controller  Node ha-450021-m02 status is now: NodeNotReady
	
	
	Name:               ha-450021-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_57_14_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:57:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:57:40 +0000   Mon, 14 Oct 2024 13:57:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.55
	  Hostname:    ha-450021-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 50171e2610d047279285af0bf8eead91
	  System UUID:                50171e26-10d0-4727-9285-af0bf8eead91
	  Boot ID:                    7b6afcf4-f39b-41c1-92d6-cc1e18f2f3ff
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-lrvnn                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 etcd-ha-450021-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m20s
	  kube-system                 kindnet-7jwgx                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m22s
	  kube-system                 kube-apiserver-ha-450021-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-controller-manager-ha-450021-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-proxy-9tbfp                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 kube-scheduler-ha-450021-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-vip-ha-450021-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m22s (x8 over 4m22s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m22s (x8 over 4m22s)  kubelet          Node ha-450021-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m22s (x7 over 4m22s)  kubelet          Node ha-450021-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           4m17s                  node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	  Normal  RegisteredNode           4m12s                  node-controller  Node ha-450021-m03 event: Registered Node ha-450021-m03 in Controller
	
	
	Name:               ha-450021-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-450021-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=ha-450021
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_10_14T13_58_15_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:58:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-450021-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:01:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:58:45 +0000   Mon, 14 Oct 2024 13:58:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ha-450021-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 c8da54fea409461c84c103e8552a3553
	  System UUID:                c8da54fe-a409-461c-84c1-03e8552a3553
	  Boot ID:                    ed9b9ad9-a71a-4814-ae07-6cc1c2775deb
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-478bj       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m18s
	  kube-system                 kube-proxy-2mfnd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m18s (x2 over 3m19s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m18s (x2 over 3m19s)  kubelet          Node ha-450021-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m18s (x2 over 3m19s)  kubelet          Node ha-450021-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  RegisteredNode           3m14s                  node-controller  Node ha-450021-m04 event: Registered Node ha-450021-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-450021-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct14 13:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050735] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040529] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.861908] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.617931] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.603277] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.339591] systemd-fstab-generator[579]: Ignoring "noauto" option for root device
	[  +0.056090] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067047] systemd-fstab-generator[591]: Ignoring "noauto" option for root device
	[  +0.182956] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.129853] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.268814] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +3.909642] systemd-fstab-generator[746]: Ignoring "noauto" option for root device
	[  +4.099441] systemd-fstab-generator[876]: Ignoring "noauto" option for root device
	[  +0.067805] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.555395] systemd-fstab-generator[1292]: Ignoring "noauto" option for root device
	[  +0.098328] kauditd_printk_skb: 79 callbacks suppressed
	[Oct14 13:55] kauditd_printk_skb: 18 callbacks suppressed
	[ +14.850947] kauditd_printk_skb: 41 callbacks suppressed
	[Oct14 13:56] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1] <==
	{"level":"warn","ts":"2024-10-14T14:01:32.597095Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.605054Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.608155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.615765Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.621280Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.628050Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.632191Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.635713Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.643874Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.649630Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.655368Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.659263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.660032Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.662987Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.665670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.670483Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.676741Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.679109Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.688150Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.694520Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.698930Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.712502Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.718935Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.725475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-10-14T14:01:32.760999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"f70d523d4475ce3b","from":"f70d523d4475ce3b","remote-peer-id":"c0fc083165b74f9c","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 14:01:32 up 7 min,  0 users,  load average: 0.18, 0.20, 0.10
	Linux ha-450021 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996] <==
	I1014 14:00:58.793233       1 main.go:300] handling current node
	I1014 14:01:08.792774       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:08.792894       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:01:08.793209       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:08.793270       1 main.go:300] handling current node
	I1014 14:01:08.793308       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:08.793385       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:08.793725       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:08.793788       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:01:18.792871       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:18.792903       1 main.go:300] handling current node
	I1014 14:01:18.792918       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:18.792922       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:18.793175       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:18.793264       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:01:18.793419       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:18.793492       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	I1014 14:01:28.801672       1 main.go:296] Handling node with IPs: map[192.168.39.176:{}]
	I1014 14:01:28.801769       1 main.go:300] handling current node
	I1014 14:01:28.801798       1 main.go:296] Handling node with IPs: map[192.168.39.89:{}]
	I1014 14:01:28.801815       1 main.go:323] Node ha-450021-m02 has CIDR [10.244.1.0/24] 
	I1014 14:01:28.802047       1 main.go:296] Handling node with IPs: map[192.168.39.55:{}]
	I1014 14:01:28.802088       1 main.go:323] Node ha-450021-m03 has CIDR [10.244.2.0/24] 
	I1014 14:01:28.802289       1 main.go:296] Handling node with IPs: map[192.168.39.127:{}]
	I1014 14:01:28.802315       1 main.go:323] Node ha-450021-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e] <==
	I1014 13:54:59.598140       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 13:54:59.663013       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1014 13:54:59.717856       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 13:55:03.816892       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1014 13:55:04.117644       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1014 13:55:56.847231       1 writers.go:122] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.847740       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 10.384µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E1014 13:55:56.849144       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.850518       1 writers.go:135] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E1014 13:55:56.851864       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.726003ms" method="POST" path="/api/v1/namespaces/kube-system/events" result=null
	E1014 13:57:40.356093       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42006: use of closed network connection
	E1014 13:57:40.548948       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42022: use of closed network connection
	E1014 13:57:40.734061       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42040: use of closed network connection
	E1014 13:57:40.931904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42056: use of closed network connection
	E1014 13:57:41.132089       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42064: use of closed network connection
	E1014 13:57:41.311104       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42080: use of closed network connection
	E1014 13:57:41.483753       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42086: use of closed network connection
	E1014 13:57:41.673306       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:42104: use of closed network connection
	E1014 13:57:41.861924       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41084: use of closed network connection
	E1014 13:57:42.155414       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41118: use of closed network connection
	E1014 13:57:42.326032       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41138: use of closed network connection
	E1014 13:57:42.498111       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41150: use of closed network connection
	E1014 13:57:42.666091       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41168: use of closed network connection
	E1014 13:57:42.837965       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41180: use of closed network connection
	E1014 13:57:43.032348       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:41204: use of closed network connection
	
	
	==> kube-controller-manager [09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a] <==
	I1014 13:58:14.814158       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:14.814232       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	E1014 13:58:14.983101       1 daemon_controller.go:329] "Unhandled Error" err="kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:\"\", APIVersion:\"\"}, ObjectMeta:v1.ObjectMeta{Name:\"kindnet\", GenerateName:\"\", Namespace:\"kube-system\", SelfLink:\"\", UID:\"131c0255-c34c-4638-a6ae-c00d282c1fc8\", ResourceVersion:\"944\", Generation:1, CreationTimestamp:time.Date(2024, time.October, 14, 13, 55, 0, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string{\"deprecated.daemonset.template.generation\":\"1\", \"kubectl.kubernetes.io/last-applied-configuration\":\"{\\\"apiVersion\\\":\\\"apps/v1\\\",\\\"kind\\\":\\\"DaemonSet\\\",\\\"metadata\\\":{\\\"annotations\\\":{},\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"},\\\"name\\\":\\\"kindnet\\\"
,\\\"namespace\\\":\\\"kube-system\\\"},\\\"spec\\\":{\\\"selector\\\":{\\\"matchLabels\\\":{\\\"app\\\":\\\"kindnet\\\"}},\\\"template\\\":{\\\"metadata\\\":{\\\"labels\\\":{\\\"app\\\":\\\"kindnet\\\",\\\"k8s-app\\\":\\\"kindnet\\\",\\\"tier\\\":\\\"node\\\"}},\\\"spec\\\":{\\\"containers\\\":[{\\\"env\\\":[{\\\"name\\\":\\\"HOST_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.hostIP\\\"}}},{\\\"name\\\":\\\"POD_IP\\\",\\\"valueFrom\\\":{\\\"fieldRef\\\":{\\\"fieldPath\\\":\\\"status.podIP\\\"}}},{\\\"name\\\":\\\"POD_SUBNET\\\",\\\"value\\\":\\\"10.244.0.0/16\\\"}],\\\"image\\\":\\\"docker.io/kindest/kindnetd:v20241007-36f62932\\\",\\\"name\\\":\\\"kindnet-cni\\\",\\\"resources\\\":{\\\"limits\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"requests\\\":{\\\"cpu\\\":\\\"100m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"securityContext\\\":{\\\"capabilities\\\":{\\\"add\\\":[\\\"NET_RAW\\\",\\\"NET_ADMIN\\\"]},\\\"privileged\\\":false},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"cni-cfg\\\"},{\\\"mountPath\\\":\\\"/run/xtables.lock\\\",\\\"name\\\":\\\"xtables-lock\\\",\\\"readOnly\\\":false},{\\\"mountPath\\\":\\\"/lib/modules\\\",\\\"name\\\":\\\"lib-modules\\\",\\\"readOnly\\\":true}]}],\\\"hostNetwork\\\":true,\\\"serviceAccountName\\\":\\\"kindnet\\\",\\\"tolerations\\\":[{\\\"effect\\\":\\\"NoSchedule\\\",\\\"operator\\\":\\\"Exists\\\"}],\\\"volumes\\\":[{\\\"hostPath\\\":{\\\"path\\\":\\\"/etc/cni/net.d\\\",\\\"type\\\":\\\"DirectoryOrCreate\\\"},\\\"name\\\":\\\"cni-cfg\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/run/xtables.lock\\\",\\\"type\\\":\\\"FileOrCreate\\\"},\\\"name\\\":\\\"xtables-lock\\\"},{\\\"hostPath\\\":{\\\"path\\\":\\\"/lib/modules\\\"},\\\"name\\\":\\\"lib-modules\\\"}]}}}}\\n\"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000d57240), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:\"
\", GenerateName:\"\", Namespace:\"\", SelfLink:\"\", UID:\"\", ResourceVersion:\"\", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{\"app\":\"kindnet\", \"k8s-app\":\"kindnet\", \"tier\":\"node\"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:\"cni-cfg\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b248), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeC
laimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"xtables-lock\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b260), EmptyDir:(*v1.EmptyDirVolumeSource)(
nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxV
olumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}, v1.Volume{Name:\"lib-modules\", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00075b278), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Azu
reFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil), Image:(*v1.ImageVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:\"kindnet-cni\", Image:\"docker.io/kindest/kindnetd:v20241007-36f62932\", Command:[]string(nil), Args:[]string(nil), WorkingDir:\"\", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:\"HOST_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSource)(0xc000d57280)}, v1.EnvVar{Name:\"POD_IP\", Value:\"\", ValueFrom:(*v1.EnvVarSo
urce)(0xc000d57300)}, v1.EnvVar{Name:\"POD_SUBNET\", Value:\"10.244.0.0/16\", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Requests:v1.ResourceList{\"cpu\":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"100m\", Format:\"DecimalSI\"}, \"memory\":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:\"50Mi\", Format:\"BinarySI\"}}, Claims:[]v1.ResourceClaim(nil)}, ResizePolicy:[]v1.ContainerResizePolicy(nil), RestartPolicy:(*v1.ContainerRestartPolicy)(nil), VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:\"cni-cfg\", ReadOnly:false
, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/etc/cni/net.d\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"xtables-lock\", ReadOnly:false, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/run/xtables.lock\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}, v1.VolumeMount{Name:\"lib-modules\", ReadOnly:true, RecursiveReadOnly:(*v1.RecursiveReadOnlyMode)(nil), MountPath:\"/lib/modules\", SubPath:\"\", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:\"\"}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:\"/dev/termination-log\", TerminationMessagePolicy:\"File\", ImagePullPolicy:\"IfNotPresent\", SecurityContext:(*v1.SecurityContext)(0xc001b502a0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralConta
iner(nil), RestartPolicy:\"Always\", TerminationGracePeriodSeconds:(*int64)(0xc001820428), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:\"ClusterFirst\", NodeSelector:map[string]string(nil), ServiceAccountName:\"kindnet\", DeprecatedServiceAccount:\"kindnet\", AutomountServiceAccountToken:(*bool)(nil), NodeName:\"\", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001d51480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:\"\", Subdomain:\"\", Affinity:(*v1.Affinity)(nil), SchedulerName:\"default-scheduler\", Tolerations:[]v1.Toleration{v1.Toleration{Key:\"\", Operator:\"Exists\", Value:\"\", Effect:\"NoSchedule\", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Ov
erhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:\"RollingUpdate\", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001e15e60)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001820470)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:3, NumberMisscheduled:0, DesiredNumberScheduled:3, NumberReady:3, ObservedGeneration:1, UpdatedNumberScheduled:3, NumberAvailable:3, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps \"kindnet\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1014 13:58:14.983373       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.178688       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.243657       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.340286       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:15.399942       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263248       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:18.263850       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-450021-m04"
	I1014 13:58:18.322338       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:24.991672       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758209       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:32.758699       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:58:32.779681       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:33.281205       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:58:45.471689       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m04"
	I1014 13:59:30.147306       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-450021-m04"
	I1014 13:59:30.148143       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.170693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:30.349046       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="34.558914ms"
	I1014 13:59:30.349473       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="165.118µs"
	I1014 13:59:33.404625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	I1014 13:59:35.409214       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-450021-m02"
	
	
	==> kube-proxy [5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 13:55:05.027976       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 13:55:05.042612       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.176"]
	E1014 13:55:05.042701       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:55:05.077520       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 13:55:05.077626       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 13:55:05.077653       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:55:05.080947       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:55:05.081416       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:55:05.081449       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:55:05.084048       1 config.go:199] "Starting service config controller"
	I1014 13:55:05.084244       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:55:05.084407       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:55:05.084429       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:55:05.085497       1 config.go:328] "Starting node config controller"
	I1014 13:55:05.085525       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:55:05.185149       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 13:55:05.185195       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:55:05.185638       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221] <==
	W1014 13:54:57.431755       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.431801       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.619315       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.619367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.631913       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 13:54:57.632033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.666200       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.666268       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.675854       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:54:57.675918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:54:57.682854       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 13:54:57.683283       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 13:54:57.820025       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 13:54:57.820087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:55:00.246826       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1014 13:57:36.278433       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.278688       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 07dccd61-4a5a-4d82-ba70-df7e6ff6bb4c(default/busybox-7dff88458-fkz82) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-fkz82"
	E1014 13:57:36.278737       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-fkz82\": pod busybox-7dff88458-fkz82 is already assigned to node \"ha-450021\"" pod="default/busybox-7dff88458-fkz82"
	I1014 13:57:36.278788       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-fkz82" node="ha-450021"
	E1014 13:57:36.279144       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:57:36.279201       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c0e6c9da-2bbd-4814-9310-ab74d5a3e09d(default/busybox-7dff88458-lrvnn) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-lrvnn"
	E1014 13:57:36.279240       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-lrvnn\": pod busybox-7dff88458-lrvnn is already assigned to node \"ha-450021-m03\"" pod="default/busybox-7dff88458-lrvnn"
	I1014 13:57:36.279273       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-lrvnn" node="ha-450021-m03"
	E1014 13:58:14.867309       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-2mfnd" node="ha-450021-m04"
	E1014 13:58:14.867404       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-2mfnd\": pod kube-proxy-2mfnd is already assigned to node \"ha-450021-m04\"" pod="kube-system/kube-proxy-2mfnd"
	
	
	==> kubelet <==
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850190    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 13:59:59 ha-450021 kubelet[1299]: E1014 13:59:59.850218    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914399849941739,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852474    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:09 ha-450021 kubelet[1299]: E1014 14:00:09.852527    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914409852112835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856761    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:19 ha-450021 kubelet[1299]: E1014 14:00:19.856806    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914419856453814,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858206    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:29 ha-450021 kubelet[1299]: E1014 14:00:29.858470    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914429857922237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861764    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:39 ha-450021 kubelet[1299]: E1014 14:00:39.861870    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914439861102356,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864513    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:49 ha-450021 kubelet[1299]: E1014 14:00:49.864550    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914449864091872,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.724357    1299 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 14:00:59 ha-450021 kubelet[1299]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 14:00:59 ha-450021 kubelet[1299]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866616    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:00:59 ha-450021 kubelet[1299]: E1014 14:00:59.866661    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914459866140857,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869535    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:09 ha-450021 kubelet[1299]: E1014 14:01:09.869642    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914469868732835,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:19 ha-450021 kubelet[1299]: E1014 14:01:19.870997    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479870763162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:19 ha-450021 kubelet[1299]: E1014 14:01:19.871040    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914479870763162,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:29 ha-450021 kubelet[1299]: E1014 14:01:29.872833    1299 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914489872491681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 14:01:29 ha-450021 kubelet[1299]: E1014 14:01:29.872911    1299 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728914489872491681,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156102,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-450021 -n ha-450021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (416.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-450021 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-450021 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-450021 -v=7 --alsologtostderr: exit status 82 (2m1.849570974s)

                                                
                                                
-- stdout --
	* Stopping node "ha-450021-m04"  ...
	* Stopping node "ha-450021-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:01:33.824649   30529 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:01:33.824775   30529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:01:33.824788   30529 out.go:358] Setting ErrFile to fd 2...
	I1014 14:01:33.824795   30529 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:01:33.825079   30529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:01:33.825397   30529 out.go:352] Setting JSON to false
	I1014 14:01:33.825527   30529 mustload.go:65] Loading cluster: ha-450021
	I1014 14:01:33.826106   30529 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:01:33.826204   30529 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 14:01:33.826445   30529 mustload.go:65] Loading cluster: ha-450021
	I1014 14:01:33.826659   30529 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:01:33.826714   30529 stop.go:39] StopHost: ha-450021-m04
	I1014 14:01:33.827278   30529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:01:33.827333   30529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:01:33.841875   30529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45259
	I1014 14:01:33.842343   30529 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:01:33.842933   30529 main.go:141] libmachine: Using API Version  1
	I1014 14:01:33.842954   30529 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:01:33.843313   30529 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:01:33.847468   30529 out.go:177] * Stopping node "ha-450021-m04"  ...
	I1014 14:01:33.848921   30529 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1014 14:01:33.848966   30529 main.go:141] libmachine: (ha-450021-m04) Calling .DriverName
	I1014 14:01:33.849233   30529 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1014 14:01:33.849268   30529 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHHostname
	I1014 14:01:33.852235   30529 main.go:141] libmachine: (ha-450021-m04) DBG | domain ha-450021-m04 has defined MAC address 52:54:00:89:83:25 in network mk-ha-450021
	I1014 14:01:33.852650   30529 main.go:141] libmachine: (ha-450021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:83:25", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:57:58 +0000 UTC Type:0 Mac:52:54:00:89:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-450021-m04 Clientid:01:52:54:00:89:83:25}
	I1014 14:01:33.852689   30529 main.go:141] libmachine: (ha-450021-m04) DBG | domain ha-450021-m04 has defined IP address 192.168.39.127 and MAC address 52:54:00:89:83:25 in network mk-ha-450021
	I1014 14:01:33.852906   30529 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHPort
	I1014 14:01:33.853123   30529 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHKeyPath
	I1014 14:01:33.853339   30529 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHUsername
	I1014 14:01:33.853464   30529 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m04/id_rsa Username:docker}
	I1014 14:01:33.943805   30529 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1014 14:01:33.997941   30529 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1014 14:01:34.051013   30529 main.go:141] libmachine: Stopping "ha-450021-m04"...
	I1014 14:01:34.051040   30529 main.go:141] libmachine: (ha-450021-m04) Calling .GetState
	I1014 14:01:34.052460   30529 main.go:141] libmachine: (ha-450021-m04) Calling .Stop
	I1014 14:01:34.055865   30529 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 0/120
	I1014 14:01:35.213530   30529 main.go:141] libmachine: (ha-450021-m04) Calling .GetState
	I1014 14:01:35.214750   30529 main.go:141] libmachine: Machine "ha-450021-m04" was stopped.
	I1014 14:01:35.214767   30529 stop.go:75] duration metric: took 1.365854451s to stop
	I1014 14:01:35.214802   30529 stop.go:39] StopHost: ha-450021-m03
	I1014 14:01:35.215123   30529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:01:35.215194   30529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:01:35.229353   30529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I1014 14:01:35.229800   30529 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:01:35.230282   30529 main.go:141] libmachine: Using API Version  1
	I1014 14:01:35.230308   30529 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:01:35.230624   30529 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:01:35.232603   30529 out.go:177] * Stopping node "ha-450021-m03"  ...
	I1014 14:01:35.234099   30529 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1014 14:01:35.234118   30529 main.go:141] libmachine: (ha-450021-m03) Calling .DriverName
	I1014 14:01:35.234335   30529 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1014 14:01:35.234359   30529 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHHostname
	I1014 14:01:35.237390   30529 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 14:01:35.237838   30529 main.go:141] libmachine: (ha-450021-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:af:04:2c", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:56:34 +0000 UTC Type:0 Mac:52:54:00:af:04:2c Iaid: IPaddr:192.168.39.55 Prefix:24 Hostname:ha-450021-m03 Clientid:01:52:54:00:af:04:2c}
	I1014 14:01:35.237868   30529 main.go:141] libmachine: (ha-450021-m03) DBG | domain ha-450021-m03 has defined IP address 192.168.39.55 and MAC address 52:54:00:af:04:2c in network mk-ha-450021
	I1014 14:01:35.237993   30529 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHPort
	I1014 14:01:35.238143   30529 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHKeyPath
	I1014 14:01:35.238291   30529 main.go:141] libmachine: (ha-450021-m03) Calling .GetSSHUsername
	I1014 14:01:35.238409   30529 sshutil.go:53] new ssh client: &{IP:192.168.39.55 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m03/id_rsa Username:docker}
	I1014 14:01:35.323832   30529 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1014 14:01:35.377227   30529 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1014 14:01:35.432534   30529 main.go:141] libmachine: Stopping "ha-450021-m03"...
	I1014 14:01:35.432566   30529 main.go:141] libmachine: (ha-450021-m03) Calling .GetState
	I1014 14:01:35.434070   30529 main.go:141] libmachine: (ha-450021-m03) Calling .Stop
	I1014 14:01:35.437729   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 0/120
	I1014 14:01:36.439097   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 1/120
	I1014 14:01:37.440213   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 2/120
	I1014 14:01:38.441445   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 3/120
	I1014 14:01:39.442803   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 4/120
	I1014 14:01:40.444459   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 5/120
	I1014 14:01:41.446023   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 6/120
	I1014 14:01:42.447319   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 7/120
	I1014 14:01:43.448676   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 8/120
	I1014 14:01:44.450318   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 9/120
	I1014 14:01:45.452464   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 10/120
	I1014 14:01:46.454167   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 11/120
	I1014 14:01:47.455888   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 12/120
	I1014 14:01:48.457526   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 13/120
	I1014 14:01:49.458894   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 14/120
	I1014 14:01:50.460790   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 15/120
	I1014 14:01:51.462343   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 16/120
	I1014 14:01:52.463550   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 17/120
	I1014 14:01:53.465225   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 18/120
	I1014 14:01:54.466551   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 19/120
	I1014 14:01:55.468568   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 20/120
	I1014 14:01:56.470418   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 21/120
	I1014 14:01:57.471771   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 22/120
	I1014 14:01:58.473552   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 23/120
	I1014 14:01:59.474929   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 24/120
	I1014 14:02:00.476642   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 25/120
	I1014 14:02:01.478226   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 26/120
	I1014 14:02:02.479809   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 27/120
	I1014 14:02:03.481262   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 28/120
	I1014 14:02:04.482774   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 29/120
	I1014 14:02:05.484839   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 30/120
	I1014 14:02:06.486351   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 31/120
	I1014 14:02:07.487730   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 32/120
	I1014 14:02:08.489394   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 33/120
	I1014 14:02:09.490716   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 34/120
	I1014 14:02:10.492464   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 35/120
	I1014 14:02:11.493824   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 36/120
	I1014 14:02:12.495167   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 37/120
	I1014 14:02:13.496709   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 38/120
	I1014 14:02:14.497947   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 39/120
	I1014 14:02:15.499804   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 40/120
	I1014 14:02:16.501276   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 41/120
	I1014 14:02:17.502691   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 42/120
	I1014 14:02:18.503952   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 43/120
	I1014 14:02:19.505300   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 44/120
	I1014 14:02:20.507497   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 45/120
	I1014 14:02:21.508856   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 46/120
	I1014 14:02:22.510185   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 47/120
	I1014 14:02:23.511864   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 48/120
	I1014 14:02:24.513187   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 49/120
	I1014 14:02:25.515199   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 50/120
	I1014 14:02:26.517198   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 51/120
	I1014 14:02:27.518425   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 52/120
	I1014 14:02:28.519784   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 53/120
	I1014 14:02:29.521032   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 54/120
	I1014 14:02:30.522228   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 55/120
	I1014 14:02:31.523442   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 56/120
	I1014 14:02:32.524621   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 57/120
	I1014 14:02:33.525871   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 58/120
	I1014 14:02:34.527544   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 59/120
	I1014 14:02:35.529241   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 60/120
	I1014 14:02:36.531275   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 61/120
	I1014 14:02:37.533355   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 62/120
	I1014 14:02:38.534838   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 63/120
	I1014 14:02:39.536065   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 64/120
	I1014 14:02:40.537823   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 65/120
	I1014 14:02:41.539582   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 66/120
	I1014 14:02:42.541153   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 67/120
	I1014 14:02:43.542428   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 68/120
	I1014 14:02:44.543828   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 69/120
	I1014 14:02:45.545658   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 70/120
	I1014 14:02:46.546870   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 71/120
	I1014 14:02:47.548171   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 72/120
	I1014 14:02:48.549323   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 73/120
	I1014 14:02:49.550768   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 74/120
	I1014 14:02:50.552482   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 75/120
	I1014 14:02:51.554228   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 76/120
	I1014 14:02:52.555563   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 77/120
	I1014 14:02:53.557007   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 78/120
	I1014 14:02:54.558304   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 79/120
	I1014 14:02:55.559787   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 80/120
	I1014 14:02:56.561243   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 81/120
	I1014 14:02:57.562677   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 82/120
	I1014 14:02:58.564062   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 83/120
	I1014 14:02:59.565373   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 84/120
	I1014 14:03:00.567456   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 85/120
	I1014 14:03:01.568807   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 86/120
	I1014 14:03:02.570363   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 87/120
	I1014 14:03:03.571798   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 88/120
	I1014 14:03:04.572951   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 89/120
	I1014 14:03:05.574574   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 90/120
	I1014 14:03:06.576222   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 91/120
	I1014 14:03:07.577791   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 92/120
	I1014 14:03:08.579266   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 93/120
	I1014 14:03:09.581131   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 94/120
	I1014 14:03:10.582943   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 95/120
	I1014 14:03:11.585182   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 96/120
	I1014 14:03:12.586628   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 97/120
	I1014 14:03:13.588625   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 98/120
	I1014 14:03:14.590709   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 99/120
	I1014 14:03:15.592317   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 100/120
	I1014 14:03:16.593735   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 101/120
	I1014 14:03:17.595212   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 102/120
	I1014 14:03:18.597095   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 103/120
	I1014 14:03:19.598423   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 104/120
	I1014 14:03:20.600236   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 105/120
	I1014 14:03:21.601613   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 106/120
	I1014 14:03:22.603189   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 107/120
	I1014 14:03:23.604408   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 108/120
	I1014 14:03:24.605756   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 109/120
	I1014 14:03:25.607619   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 110/120
	I1014 14:03:26.609097   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 111/120
	I1014 14:03:27.610608   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 112/120
	I1014 14:03:28.611787   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 113/120
	I1014 14:03:29.613179   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 114/120
	I1014 14:03:30.614693   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 115/120
	I1014 14:03:31.616039   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 116/120
	I1014 14:03:32.617349   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 117/120
	I1014 14:03:33.618689   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 118/120
	I1014 14:03:34.620134   30529 main.go:141] libmachine: (ha-450021-m03) Waiting for machine to stop 119/120
	I1014 14:03:35.620932   30529 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1014 14:03:35.621007   30529 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1014 14:03:35.622843   30529 out.go:201] 
	W1014 14:03:35.624043   30529 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1014 14:03:35.624059   30529 out.go:270] * 
	* 
	W1014 14:03:35.627115   30529 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:03:35.628490   30529 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-450021 -v=7 --alsologtostderr" : exit status 82
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-450021 --wait=true -v=7 --alsologtostderr
E1014 14:03:36.994491   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:04:04.703483   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:06:06.400823   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:07:29.466286   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-450021 --wait=true -v=7 --alsologtostderr: (4m51.71874737s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-450021
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-450021 -n ha-450021
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 logs -n 25: (2.314504824s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m04 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp testdata/cp-test.txt                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m04_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03:/home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m03 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-450021 node stop m02 -v=7                                                     | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-450021 node start m02 -v=7                                                    | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-450021 -v=7                                                           | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-450021 -v=7                                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-450021 --wait=true -v=7                                                    | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:03 UTC | 14 Oct 24 14:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-450021                                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:08 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:03:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:03:35.675229   31000 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:03:35.675447   31000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:03:35.675455   31000 out.go:358] Setting ErrFile to fd 2...
	I1014 14:03:35.675459   31000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:03:35.675660   31000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:03:35.676174   31000 out.go:352] Setting JSON to false
	I1014 14:03:35.677032   31000 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2766,"bootTime":1728911850,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:03:35.677136   31000 start.go:139] virtualization: kvm guest
	I1014 14:03:35.682503   31000 out.go:177] * [ha-450021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:03:35.683954   31000 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:03:35.683957   31000 notify.go:220] Checking for updates...
	I1014 14:03:35.685800   31000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:03:35.687186   31000 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:03:35.688488   31000 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:03:35.689719   31000 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:03:35.690884   31000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:03:35.692618   31000 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:03:35.692727   31000 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:03:35.693178   31000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:03:35.693216   31000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:03:35.708628   31000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I1014 14:03:35.709179   31000 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:03:35.709787   31000 main.go:141] libmachine: Using API Version  1
	I1014 14:03:35.709807   31000 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:03:35.710211   31000 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:03:35.710398   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:03:35.745574   31000 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:03:35.746814   31000 start.go:297] selected driver: kvm2
	I1014 14:03:35.746827   31000 start.go:901] validating driver "kvm2" against &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:03:35.746978   31000 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:03:35.747295   31000 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:03:35.747369   31000 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:03:35.763552   31000 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:03:35.764664   31000 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:03:35.764713   31000 cni.go:84] Creating CNI manager for ""
	I1014 14:03:35.764800   31000 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 14:03:35.764878   31000 start.go:340] cluster config:
	{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:03:35.765096   31000 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:03:35.766942   31000 out.go:177] * Starting "ha-450021" primary control-plane node in "ha-450021" cluster
	I1014 14:03:35.768174   31000 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:03:35.768217   31000 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 14:03:35.768225   31000 cache.go:56] Caching tarball of preloaded images
	I1014 14:03:35.768312   31000 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:03:35.768322   31000 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 14:03:35.768450   31000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 14:03:35.768649   31000 start.go:360] acquireMachinesLock for ha-450021: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:03:35.768690   31000 start.go:364] duration metric: took 22.827µs to acquireMachinesLock for "ha-450021"
	I1014 14:03:35.768701   31000 start.go:96] Skipping create...Using existing machine configuration
	I1014 14:03:35.768711   31000 fix.go:54] fixHost starting: 
	I1014 14:03:35.768954   31000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:03:35.768991   31000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:03:35.783295   31000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I1014 14:03:35.783727   31000 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:03:35.784190   31000 main.go:141] libmachine: Using API Version  1
	I1014 14:03:35.784212   31000 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:03:35.784520   31000 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:03:35.784725   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:03:35.784868   31000 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 14:03:35.786329   31000 fix.go:112] recreateIfNeeded on ha-450021: state=Running err=<nil>
	W1014 14:03:35.786356   31000 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 14:03:35.788248   31000 out.go:177] * Updating the running kvm2 "ha-450021" VM ...
	I1014 14:03:35.789392   31000 machine.go:93] provisionDockerMachine start ...
	I1014 14:03:35.789411   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:03:35.789585   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:35.792166   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.792590   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:35.792607   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.792784   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:35.792924   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.793081   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.793234   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:35.793389   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:35.793582   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:35.793595   31000 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 14:03:35.924005   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 14:03:35.924032   31000 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 14:03:35.924265   31000 buildroot.go:166] provisioning hostname "ha-450021"
	I1014 14:03:35.924285   31000 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 14:03:35.924481   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:35.926901   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.927256   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:35.927282   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.927425   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:35.927600   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.927760   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.927899   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:35.928056   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:35.928220   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:35.928230   31000 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname
	I1014 14:03:36.060224   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 14:03:36.060249   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.062711   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.063022   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.063046   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.063244   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:36.063447   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.063598   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.063713   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:36.063886   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:36.064088   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:36.064105   31000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:03:36.183775   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:03:36.183807   31000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 14:03:36.183824   31000 buildroot.go:174] setting up certificates
	I1014 14:03:36.183831   31000 provision.go:84] configureAuth start
	I1014 14:03:36.183844   31000 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 14:03:36.184133   31000 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 14:03:36.186458   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.186809   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.186835   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.186957   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.189094   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.189486   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.189511   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.189668   31000 provision.go:143] copyHostCerts
	I1014 14:03:36.189693   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:03:36.189723   31000 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 14:03:36.189740   31000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:03:36.189805   31000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 14:03:36.189897   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:03:36.189936   31000 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 14:03:36.189943   31000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:03:36.189969   31000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 14:03:36.190025   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:03:36.190042   31000 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 14:03:36.190045   31000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:03:36.190066   31000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 14:03:36.190128   31000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021 san=[127.0.0.1 192.168.39.176 ha-450021 localhost minikube]
	I1014 14:03:36.644166   31000 provision.go:177] copyRemoteCerts
	I1014 14:03:36.644234   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:03:36.644262   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.646845   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.647215   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.647246   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.647456   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:36.647627   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.647789   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:36.647926   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:03:36.742330   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 14:03:36.742409   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 14:03:36.767821   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 14:03:36.767901   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 14:03:36.794645   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 14:03:36.794718   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 14:03:36.821537   31000 provision.go:87] duration metric: took 637.688114ms to configureAuth
	I1014 14:03:36.821564   31000 buildroot.go:189] setting minikube options for container-runtime
	I1014 14:03:36.821758   31000 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:03:36.821831   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.824462   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.824924   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.824954   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.825135   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:36.825348   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.825518   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.825672   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:36.825829   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:36.825994   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:36.826010   31000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 14:05:07.582891   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 14:05:07.582922   31000 machine.go:96] duration metric: took 1m31.793514791s to provisionDockerMachine
	I1014 14:05:07.582937   31000 start.go:293] postStartSetup for "ha-450021" (driver="kvm2")
	I1014 14:05:07.582950   31000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:05:07.582972   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.583258   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:05:07.583282   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.586233   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.586789   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.586826   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.586906   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.587088   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.587275   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.587426   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:05:07.679121   31000 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:05:07.684291   31000 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 14:05:07.684321   31000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 14:05:07.684387   31000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 14:05:07.684459   31000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 14:05:07.684469   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 14:05:07.684549   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:05:07.694801   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:05:07.721497   31000 start.go:296] duration metric: took 138.544299ms for postStartSetup
	I1014 14:05:07.721536   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.721849   31000 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1014 14:05:07.721874   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.724451   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.724800   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.724822   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.725016   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.725182   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.725308   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.725473   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	W1014 14:05:07.813814   31000 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1014 14:05:07.813844   31000 fix.go:56] duration metric: took 1m32.045134032s for fixHost
	I1014 14:05:07.813863   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.816622   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.816995   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.817020   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.817183   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.817381   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.817508   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.817631   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.817770   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:05:07.817940   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:05:07.817950   31000 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 14:05:07.931569   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914707.888734065
	
	I1014 14:05:07.931593   31000 fix.go:216] guest clock: 1728914707.888734065
	I1014 14:05:07.931602   31000 fix.go:229] Guest: 2024-10-14 14:05:07.888734065 +0000 UTC Remote: 2024-10-14 14:05:07.813851078 +0000 UTC m=+92.174581922 (delta=74.882987ms)
	I1014 14:05:07.931637   31000 fix.go:200] guest clock delta is within tolerance: 74.882987ms
	I1014 14:05:07.931646   31000 start.go:83] releasing machines lock for "ha-450021", held for 1m32.16294892s
	I1014 14:05:07.931678   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.931951   31000 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 14:05:07.934216   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.934577   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.934619   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.934732   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.935193   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.935367   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.935446   31000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:05:07.935500   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.935545   31000 ssh_runner.go:195] Run: cat /version.json
	I1014 14:05:07.935563   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.938001   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.938360   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.938394   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.938466   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.938519   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.938676   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.938820   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.938954   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:05:07.938971   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.939018   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.939165   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.939282   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.939392   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.939524   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:05:08.020339   31000 ssh_runner.go:195] Run: systemctl --version
	I1014 14:05:08.047314   31000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 14:05:08.206493   31000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 14:05:08.215545   31000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 14:05:08.215607   31000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:05:08.225258   31000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 14:05:08.225288   31000 start.go:495] detecting cgroup driver to use...
	I1014 14:05:08.225355   31000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 14:05:08.242319   31000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 14:05:08.256359   31000 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:05:08.256447   31000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:05:08.269977   31000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:05:08.284046   31000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:05:08.432232   31000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:05:08.578468   31000 docker.go:233] disabling docker service ...
	I1014 14:05:08.578547   31000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:05:08.594711   31000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:05:08.608564   31000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:05:08.753019   31000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:05:08.911980   31000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:05:08.925610   31000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:05:08.945379   31000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 14:05:08.945447   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:08.969729   31000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 14:05:08.969815   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:08.995933   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.007941   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.018981   31000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:05:09.030303   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.041177   31000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.052685   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.063935   31000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:05:09.073959   31000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:05:09.084502   31000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:05:09.235685   31000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 14:05:10.159828   31000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 14:05:10.159900   31000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 14:05:10.164853   31000 start.go:563] Will wait 60s for crictl version
	I1014 14:05:10.164914   31000 ssh_runner.go:195] Run: which crictl
	I1014 14:05:10.168780   31000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:05:10.207420   31000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 14:05:10.207491   31000 ssh_runner.go:195] Run: crio --version
	I1014 14:05:10.238348   31000 ssh_runner.go:195] Run: crio --version
	I1014 14:05:10.270007   31000 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 14:05:10.271344   31000 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 14:05:10.273972   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:10.274329   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:10.274354   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:10.274527   31000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 14:05:10.279400   31000 kubeadm.go:883] updating cluster {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:05:10.279546   31000 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:05:10.279593   31000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:05:10.324390   31000 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:05:10.324415   31000 crio.go:433] Images already preloaded, skipping extraction
	I1014 14:05:10.324469   31000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:05:10.357242   31000 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:05:10.357262   31000 cache_images.go:84] Images are preloaded, skipping loading
	I1014 14:05:10.357271   31000 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.1 crio true true} ...
	I1014 14:05:10.357389   31000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 14:05:10.357474   31000 ssh_runner.go:195] Run: crio config
	I1014 14:05:10.405793   31000 cni.go:84] Creating CNI manager for ""
	I1014 14:05:10.405820   31000 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 14:05:10.405829   31000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:05:10.405854   31000 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450021 NodeName:ha-450021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 14:05:10.405971   31000 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-450021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 14:05:10.405993   31000 kube-vip.go:115] generating kube-vip config ...
	I1014 14:05:10.406033   31000 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 14:05:10.417704   31000 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 14:05:10.417808   31000 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1014 14:05:10.417864   31000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 14:05:10.427628   31000 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:05:10.427698   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 14:05:10.437373   31000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1014 14:05:10.454606   31000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:05:10.471910   31000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1014 14:05:10.489667   31000 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 14:05:10.508129   31000 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 14:05:10.512722   31000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:05:10.664143   31000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:05:10.679747   31000 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.176
	I1014 14:05:10.679766   31000 certs.go:194] generating shared ca certs ...
	I1014 14:05:10.679784   31000 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:05:10.679950   31000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 14:05:10.680004   31000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 14:05:10.680019   31000 certs.go:256] generating profile certs ...
	I1014 14:05:10.680114   31000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 14:05:10.680148   31000 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4
	I1014 14:05:10.680165   31000 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.55 192.168.39.254]
	I1014 14:05:10.825563   31000 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4 ...
	I1014 14:05:10.825596   31000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4: {Name:mkcfbc98098c6aecb355a9c164bdef6c6768c1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:05:10.825789   31000 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4 ...
	I1014 14:05:10.825805   31000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4: {Name:mkc5cfa52ffbb125fc16bdba7b69d51ab972cad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:05:10.825900   31000 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 14:05:10.826065   31000 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 14:05:10.826220   31000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 14:05:10.826236   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 14:05:10.826252   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 14:05:10.826267   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 14:05:10.826287   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 14:05:10.826307   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 14:05:10.826326   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 14:05:10.826346   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 14:05:10.826363   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 14:05:10.826426   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 14:05:10.826464   31000 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 14:05:10.826477   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 14:05:10.826513   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 14:05:10.826551   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:05:10.826583   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 14:05:10.826657   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:05:10.826694   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 14:05:10.826782   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 14:05:10.826810   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:10.827374   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:05:10.853933   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:05:10.879688   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:05:10.904921   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 14:05:10.931560   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 14:05:10.957109   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 14:05:10.983063   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:05:11.012148   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 14:05:11.037543   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 14:05:11.063282   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 14:05:11.088424   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:05:11.112711   31000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:05:11.130372   31000 ssh_runner.go:195] Run: openssl version
	I1014 14:05:11.136367   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 14:05:11.147324   31000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 14:05:11.151964   31000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:05:11.152028   31000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 14:05:11.158243   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 14:05:11.168807   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 14:05:11.180535   31000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 14:05:11.186075   31000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:05:11.186128   31000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 14:05:11.191961   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:05:11.201595   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:05:11.212781   31000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:11.217525   31000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:11.217572   31000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:11.223525   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:05:11.232598   31000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:05:11.237526   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 14:05:11.243196   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 14:05:11.248745   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 14:05:11.254197   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 14:05:11.259855   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 14:05:11.265357   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 14:05:11.270760   31000 kubeadm.go:392] StartCluster: {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:05:11.270860   31000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 14:05:11.270898   31000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:05:11.309473   31000 cri.go:89] found id: "79dbfdd20bd5f4f1f0ceeb403aaa2b00d772fefec6569cac068f173cc8ce8946"
	I1014 14:05:11.309493   31000 cri.go:89] found id: "42c08eb5405bfaa9f4dbda0537a4bba6ea644a0856dee53a5e306c489b2a0101"
	I1014 14:05:11.309498   31000 cri.go:89] found id: "2fed31eb864aba539742e2a57181cb8356c39f61ce6c8bea61c63e89c364fd51"
	I1014 14:05:11.309502   31000 cri.go:89] found id: "7b7559a10d3142a57769693b5c224e5a3f2685c276af6ab642a96b361f9409ca"
	I1014 14:05:11.309506   31000 cri.go:89] found id: "138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe"
	I1014 14:05:11.309511   31000 cri.go:89] found id: "b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927"
	I1014 14:05:11.309515   31000 cri.go:89] found id: "b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996"
	I1014 14:05:11.309519   31000 cri.go:89] found id: "5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124"
	I1014 14:05:11.309523   31000 cri.go:89] found id: "69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899"
	I1014 14:05:11.309533   31000 cri.go:89] found id: "09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a"
	I1014 14:05:11.309536   31000 cri.go:89] found id: "4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221"
	I1014 14:05:11.309541   31000 cri.go:89] found id: "6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1"
	I1014 14:05:11.309545   31000 cri.go:89] found id: "942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e"
	I1014 14:05:11.309552   31000 cri.go:89] found id: ""
	I1014 14:05:11.309596   31000 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-450021 -n ha-450021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (416.57s)
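Editor's note: the restart log above validates each synced certificate with `openssl x509 -noout ... -checkend 86400` before starting the cluster. The following is a minimal, illustrative Go sketch of that kind of "expires within 24 hours?" check; it is not minikube's code, and the path and function name are hypothetical.

// Illustrative sketch only, not minikube's implementation. Mirrors the
// `openssl x509 -noout -checkend 86400` calls in the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpiresWithin reports whether the PEM-encoded certificate at path
// expires before now+window (the equivalent of openssl's -checkend).
func certExpiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path, used purely for illustration.
	expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}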

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-450021 stop -v=7 --alsologtostderr: exit status 82 (2m0.457243075s)

                                                
                                                
-- stdout --
	* Stopping node "ha-450021-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:08:47.867733   32978 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:08:47.867875   32978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:08:47.867885   32978 out.go:358] Setting ErrFile to fd 2...
	I1014 14:08:47.867891   32978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:08:47.868065   32978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:08:47.868307   32978 out.go:352] Setting JSON to false
	I1014 14:08:47.868397   32978 mustload.go:65] Loading cluster: ha-450021
	I1014 14:08:47.868788   32978 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:08:47.868883   32978 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 14:08:47.869069   32978 mustload.go:65] Loading cluster: ha-450021
	I1014 14:08:47.869225   32978 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:08:47.869265   32978 stop.go:39] StopHost: ha-450021-m04
	I1014 14:08:47.869646   32978 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:08:47.869701   32978 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:08:47.884126   32978 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37111
	I1014 14:08:47.884673   32978 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:08:47.885303   32978 main.go:141] libmachine: Using API Version  1
	I1014 14:08:47.885328   32978 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:08:47.885696   32978 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:08:47.887926   32978 out.go:177] * Stopping node "ha-450021-m04"  ...
	I1014 14:08:47.889386   32978 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1014 14:08:47.889422   32978 main.go:141] libmachine: (ha-450021-m04) Calling .DriverName
	I1014 14:08:47.889658   32978 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1014 14:08:47.889691   32978 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHHostname
	I1014 14:08:47.892610   32978 main.go:141] libmachine: (ha-450021-m04) DBG | domain ha-450021-m04 has defined MAC address 52:54:00:89:83:25 in network mk-ha-450021
	I1014 14:08:47.893035   32978 main.go:141] libmachine: (ha-450021-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:83:25", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 15:08:14 +0000 UTC Type:0 Mac:52:54:00:89:83:25 Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ha-450021-m04 Clientid:01:52:54:00:89:83:25}
	I1014 14:08:47.893072   32978 main.go:141] libmachine: (ha-450021-m04) DBG | domain ha-450021-m04 has defined IP address 192.168.39.127 and MAC address 52:54:00:89:83:25 in network mk-ha-450021
	I1014 14:08:47.893192   32978 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHPort
	I1014 14:08:47.893386   32978 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHKeyPath
	I1014 14:08:47.893547   32978 main.go:141] libmachine: (ha-450021-m04) Calling .GetSSHUsername
	I1014 14:08:47.893679   32978 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021-m04/id_rsa Username:docker}
	I1014 14:08:47.977583   32978 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1014 14:08:48.030326   32978 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1014 14:08:48.082984   32978 main.go:141] libmachine: Stopping "ha-450021-m04"...
	I1014 14:08:48.083021   32978 main.go:141] libmachine: (ha-450021-m04) Calling .GetState
	I1014 14:08:48.084509   32978 main.go:141] libmachine: (ha-450021-m04) Calling .Stop
	I1014 14:08:48.088105   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 0/120
	I1014 14:08:49.089393   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 1/120
	I1014 14:08:50.090800   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 2/120
	I1014 14:08:51.092277   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 3/120
	I1014 14:08:52.093657   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 4/120
	I1014 14:08:53.095546   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 5/120
	I1014 14:08:54.097123   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 6/120
	I1014 14:08:55.098301   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 7/120
	I1014 14:08:56.099661   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 8/120
	I1014 14:08:57.101145   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 9/120
	I1014 14:08:58.103355   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 10/120
	I1014 14:08:59.104693   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 11/120
	I1014 14:09:00.105999   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 12/120
	I1014 14:09:01.107388   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 13/120
	I1014 14:09:02.108724   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 14/120
	I1014 14:09:03.110870   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 15/120
	I1014 14:09:04.112096   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 16/120
	I1014 14:09:05.113486   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 17/120
	I1014 14:09:06.114854   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 18/120
	I1014 14:09:07.116073   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 19/120
	I1014 14:09:08.118230   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 20/120
	I1014 14:09:09.119579   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 21/120
	I1014 14:09:10.120997   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 22/120
	I1014 14:09:11.122286   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 23/120
	I1014 14:09:12.123613   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 24/120
	I1014 14:09:13.124915   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 25/120
	I1014 14:09:14.126353   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 26/120
	I1014 14:09:15.127430   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 27/120
	I1014 14:09:16.129431   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 28/120
	I1014 14:09:17.130657   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 29/120
	I1014 14:09:18.132899   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 30/120
	I1014 14:09:19.134194   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 31/120
	I1014 14:09:20.136138   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 32/120
	I1014 14:09:21.137388   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 33/120
	I1014 14:09:22.138850   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 34/120
	I1014 14:09:23.141006   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 35/120
	I1014 14:09:24.142294   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 36/120
	I1014 14:09:25.143731   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 37/120
	I1014 14:09:26.145135   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 38/120
	I1014 14:09:27.146537   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 39/120
	I1014 14:09:28.148678   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 40/120
	I1014 14:09:29.150122   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 41/120
	I1014 14:09:30.151569   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 42/120
	I1014 14:09:31.152925   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 43/120
	I1014 14:09:32.154233   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 44/120
	I1014 14:09:33.155446   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 45/120
	I1014 14:09:34.156757   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 46/120
	I1014 14:09:35.158126   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 47/120
	I1014 14:09:36.159585   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 48/120
	I1014 14:09:37.160769   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 49/120
	I1014 14:09:38.162892   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 50/120
	I1014 14:09:39.165311   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 51/120
	I1014 14:09:40.166726   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 52/120
	I1014 14:09:41.169245   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 53/120
	I1014 14:09:42.170497   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 54/120
	I1014 14:09:43.171672   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 55/120
	I1014 14:09:44.173062   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 56/120
	I1014 14:09:45.174192   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 57/120
	I1014 14:09:46.175530   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 58/120
	I1014 14:09:47.177121   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 59/120
	I1014 14:09:48.178948   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 60/120
	I1014 14:09:49.180856   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 61/120
	I1014 14:09:50.182192   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 62/120
	I1014 14:09:51.183518   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 63/120
	I1014 14:09:52.184642   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 64/120
	I1014 14:09:53.186610   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 65/120
	I1014 14:09:54.188045   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 66/120
	I1014 14:09:55.189391   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 67/120
	I1014 14:09:56.190861   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 68/120
	I1014 14:09:57.192320   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 69/120
	I1014 14:09:58.193929   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 70/120
	I1014 14:09:59.195329   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 71/120
	I1014 14:10:00.196934   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 72/120
	I1014 14:10:01.198372   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 73/120
	I1014 14:10:02.199891   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 74/120
	I1014 14:10:03.201969   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 75/120
	I1014 14:10:04.203594   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 76/120
	I1014 14:10:05.205382   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 77/120
	I1014 14:10:06.206771   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 78/120
	I1014 14:10:07.209437   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 79/120
	I1014 14:10:08.211704   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 80/120
	I1014 14:10:09.213092   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 81/120
	I1014 14:10:10.214388   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 82/120
	I1014 14:10:11.215785   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 83/120
	I1014 14:10:12.217304   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 84/120
	I1014 14:10:13.219418   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 85/120
	I1014 14:10:14.220832   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 86/120
	I1014 14:10:15.222124   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 87/120
	I1014 14:10:16.223576   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 88/120
	I1014 14:10:17.225637   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 89/120
	I1014 14:10:18.227811   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 90/120
	I1014 14:10:19.229093   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 91/120
	I1014 14:10:20.230517   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 92/120
	I1014 14:10:21.231746   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 93/120
	I1014 14:10:22.233196   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 94/120
	I1014 14:10:23.234681   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 95/120
	I1014 14:10:24.236328   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 96/120
	I1014 14:10:25.237691   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 97/120
	I1014 14:10:26.238945   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 98/120
	I1014 14:10:27.240231   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 99/120
	I1014 14:10:28.242422   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 100/120
	I1014 14:10:29.243742   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 101/120
	I1014 14:10:30.245113   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 102/120
	I1014 14:10:31.246592   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 103/120
	I1014 14:10:32.247941   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 104/120
	I1014 14:10:33.249662   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 105/120
	I1014 14:10:34.250965   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 106/120
	I1014 14:10:35.252214   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 107/120
	I1014 14:10:36.253547   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 108/120
	I1014 14:10:37.254859   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 109/120
	I1014 14:10:38.256170   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 110/120
	I1014 14:10:39.258152   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 111/120
	I1014 14:10:40.259418   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 112/120
	I1014 14:10:41.261094   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 113/120
	I1014 14:10:42.263046   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 114/120
	I1014 14:10:43.264826   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 115/120
	I1014 14:10:44.266415   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 116/120
	I1014 14:10:45.267571   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 117/120
	I1014 14:10:46.268932   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 118/120
	I1014 14:10:47.270354   32978 main.go:141] libmachine: (ha-450021-m04) Waiting for machine to stop 119/120
	I1014 14:10:48.271656   32978 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1014 14:10:48.271726   32978 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1014 14:10:48.273689   32978 out.go:201] 
	W1014 14:10:48.275064   32978 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1014 14:10:48.275085   32978 out.go:270] * 
	* 
	W1014 14:10:48.277910   32978 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:10:48.279334   32978 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-450021 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
E1014 14:11:06.401585   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr: (18.910189951s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr": 
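Editor's note: the stderr above shows the stop polling loop exhausting all 120 one-second attempts ("Waiting for machine to stop N/120") and surfacing GUEST_STOP_TIMEOUT (exit status 82). Below is a minimal Go sketch of that bounded wait pattern under stated assumptions; the vmDriver interface, type names, and demo driver are hypothetical stand-ins, not minikube's actual API.

// Illustrative sketch only, not minikube's implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// vmDriver is a stand-in for whatever object can stop a VM and report its state.
type vmDriver interface {
	Stop() error
	State() (string, error)
}

// waitForStop asks the driver to stop the VM, then polls its state once per
// second for up to attempts iterations, mirroring "Waiting for machine to stop N/120",
// and gives up with the kind of error that is reported as GUEST_STOP_TIMEOUT above.
func waitForStop(d vmDriver, attempts int) error {
	if err := d.Stop(); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		state, err := d.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	return errors.New(`unable to stop vm, current state "Running"`)
}

// stuckVM simulates a guest that never finishes shutting down.
type stuckVM struct{}

func (stuckVM) Stop() error            { return nil }
func (stuckVM) State() (string, error) { return "Running", nil }

func main() {
	// Use 120 attempts to match the log's timeout; 3 keeps the demo short.
	if err := waitForStop(stuckVM{}, 3); err != nil {
		fmt.Println("stop err:", err)
	}
}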
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-450021 -n ha-450021
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 logs -n 25: (2.133401019s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m04 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp testdata/cp-test.txt                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021:/home/docker/cp-test_ha-450021-m04_ha-450021.txt                       |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021 sudo cat                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021.txt                                 |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m02:/home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m02 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m03:/home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n                                                                 | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | ha-450021-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-450021 ssh -n ha-450021-m03 sudo cat                                          | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC | 14 Oct 24 13:58 UTC |
	|         | /home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-450021 node stop m02 -v=7                                                     | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 13:58 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-450021 node start m02 -v=7                                                    | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-450021 -v=7                                                           | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-450021 -v=7                                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:01 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-450021 --wait=true -v=7                                                    | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:03 UTC | 14 Oct 24 14:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-450021                                                                | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:08 UTC |                     |
	| node    | ha-450021 node delete m03 -v=7                                                   | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:08 UTC | 14 Oct 24 14:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-450021 stop -v=7                                                              | ha-450021 | jenkins | v1.34.0 | 14 Oct 24 14:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
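A minimal sketch (not part of the captured log) of how the cluster-wide stop/start cycle recorded near the bottom of the table could be replayed against the same profile. The profile name "ha-450021" and the flags are taken verbatim from the table rows above; the Go wrapper itself is hypothetical and assumes a minikube binary on PATH.

// replay.go: hypothetical helper that re-runs the stop/start sequence
// shown in the command table above against the existing profile.
package main

import (
	"log"
	"os"
	"os/exec"
)

// run invokes the local minikube binary with the given arguments,
// streaming its output to the console.
func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Flags copied from the "stop" and "start" rows of the table.
	steps := [][]string{
		{"stop", "-p", "ha-450021", "-v=7", "--alsologtostderr"},
		{"start", "-p", "ha-450021", "--wait=true", "-v=7", "--alsologtostderr"},
	}
	for _, s := range steps {
		if err := run(s...); err != nil {
			log.Fatalf("minikube %v failed: %v", s, err)
		}
	}
}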
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:03:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
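	(As a worked example of this format, the first entry below, "I1014 14:03:35.675229   31000 out.go:345]", decodes as an Info-level message logged on 10/14 at 14:03:35.675229 by thread/process 31000 from out.go line 345.)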
	I1014 14:03:35.675229   31000 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:03:35.675447   31000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:03:35.675455   31000 out.go:358] Setting ErrFile to fd 2...
	I1014 14:03:35.675459   31000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:03:35.675660   31000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:03:35.676174   31000 out.go:352] Setting JSON to false
	I1014 14:03:35.677032   31000 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2766,"bootTime":1728911850,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:03:35.677136   31000 start.go:139] virtualization: kvm guest
	I1014 14:03:35.682503   31000 out.go:177] * [ha-450021] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:03:35.683954   31000 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:03:35.683957   31000 notify.go:220] Checking for updates...
	I1014 14:03:35.685800   31000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:03:35.687186   31000 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:03:35.688488   31000 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:03:35.689719   31000 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:03:35.690884   31000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:03:35.692618   31000 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:03:35.692727   31000 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:03:35.693178   31000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:03:35.693216   31000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:03:35.708628   31000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32775
	I1014 14:03:35.709179   31000 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:03:35.709787   31000 main.go:141] libmachine: Using API Version  1
	I1014 14:03:35.709807   31000 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:03:35.710211   31000 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:03:35.710398   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:03:35.745574   31000 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:03:35.746814   31000 start.go:297] selected driver: kvm2
	I1014 14:03:35.746827   31000 start.go:901] validating driver "kvm2" against &{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false de
fault-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:03:35.746978   31000 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:03:35.747295   31000 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:03:35.747369   31000 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:03:35.763552   31000 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:03:35.764664   31000 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:03:35.764713   31000 cni.go:84] Creating CNI manager for ""
	I1014 14:03:35.764800   31000 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 14:03:35.764878   31000 start.go:340] cluster config:
	{Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:fa
lse headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:03:35.765096   31000 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:03:35.766942   31000 out.go:177] * Starting "ha-450021" primary control-plane node in "ha-450021" cluster
	I1014 14:03:35.768174   31000 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:03:35.768217   31000 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 14:03:35.768225   31000 cache.go:56] Caching tarball of preloaded images
	I1014 14:03:35.768312   31000 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:03:35.768322   31000 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 14:03:35.768450   31000 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/config.json ...
	I1014 14:03:35.768649   31000 start.go:360] acquireMachinesLock for ha-450021: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:03:35.768690   31000 start.go:364] duration metric: took 22.827µs to acquireMachinesLock for "ha-450021"
	I1014 14:03:35.768701   31000 start.go:96] Skipping create...Using existing machine configuration
	I1014 14:03:35.768711   31000 fix.go:54] fixHost starting: 
	I1014 14:03:35.768954   31000 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:03:35.768991   31000 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:03:35.783295   31000 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I1014 14:03:35.783727   31000 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:03:35.784190   31000 main.go:141] libmachine: Using API Version  1
	I1014 14:03:35.784212   31000 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:03:35.784520   31000 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:03:35.784725   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:03:35.784868   31000 main.go:141] libmachine: (ha-450021) Calling .GetState
	I1014 14:03:35.786329   31000 fix.go:112] recreateIfNeeded on ha-450021: state=Running err=<nil>
	W1014 14:03:35.786356   31000 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 14:03:35.788248   31000 out.go:177] * Updating the running kvm2 "ha-450021" VM ...
	I1014 14:03:35.789392   31000 machine.go:93] provisionDockerMachine start ...
	I1014 14:03:35.789411   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:03:35.789585   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:35.792166   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.792590   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:35.792607   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.792784   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:35.792924   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.793081   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.793234   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:35.793389   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:35.793582   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:35.793595   31000 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 14:03:35.924005   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 14:03:35.924032   31000 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 14:03:35.924265   31000 buildroot.go:166] provisioning hostname "ha-450021"
	I1014 14:03:35.924285   31000 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 14:03:35.924481   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:35.926901   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.927256   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:35.927282   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:35.927425   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:35.927600   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.927760   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:35.927899   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:35.928056   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:35.928220   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:35.928230   31000 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-450021 && echo "ha-450021" | sudo tee /etc/hostname
	I1014 14:03:36.060224   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-450021
	
	I1014 14:03:36.060249   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.062711   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.063022   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.063046   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.063244   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:36.063447   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.063598   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.063713   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:36.063886   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:36.064088   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:36.064105   31000 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-450021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-450021/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-450021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:03:36.183775   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:03:36.183807   31000 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 14:03:36.183824   31000 buildroot.go:174] setting up certificates
	I1014 14:03:36.183831   31000 provision.go:84] configureAuth start
	I1014 14:03:36.183844   31000 main.go:141] libmachine: (ha-450021) Calling .GetMachineName
	I1014 14:03:36.184133   31000 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 14:03:36.186458   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.186809   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.186835   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.186957   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.189094   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.189486   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.189511   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.189668   31000 provision.go:143] copyHostCerts
	I1014 14:03:36.189693   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:03:36.189723   31000 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 14:03:36.189740   31000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:03:36.189805   31000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 14:03:36.189897   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:03:36.189936   31000 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 14:03:36.189943   31000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:03:36.189969   31000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 14:03:36.190025   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:03:36.190042   31000 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 14:03:36.190045   31000 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:03:36.190066   31000 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 14:03:36.190128   31000 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.ha-450021 san=[127.0.0.1 192.168.39.176 ha-450021 localhost minikube]
	I1014 14:03:36.644166   31000 provision.go:177] copyRemoteCerts
	I1014 14:03:36.644234   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:03:36.644262   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.646845   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.647215   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.647246   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.647456   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:36.647627   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.647789   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:36.647926   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:03:36.742330   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 14:03:36.742409   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 14:03:36.767821   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 14:03:36.767901   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1014 14:03:36.794645   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 14:03:36.794718   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 14:03:36.821537   31000 provision.go:87] duration metric: took 637.688114ms to configureAuth
	I1014 14:03:36.821564   31000 buildroot.go:189] setting minikube options for container-runtime
	I1014 14:03:36.821758   31000 config.go:182] Loaded profile config "ha-450021": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:03:36.821831   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:03:36.824462   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.824924   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:03:36.824954   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:03:36.825135   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:03:36.825348   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.825518   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:03:36.825672   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:03:36.825829   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:03:36.825994   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:03:36.826010   31000 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 14:05:07.582891   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 14:05:07.582922   31000 machine.go:96] duration metric: took 1m31.793514791s to provisionDockerMachine
	I1014 14:05:07.582937   31000 start.go:293] postStartSetup for "ha-450021" (driver="kvm2")
	I1014 14:05:07.582950   31000 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:05:07.582972   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.583258   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:05:07.583282   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.586233   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.586789   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.586826   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.586906   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.587088   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.587275   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.587426   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:05:07.679121   31000 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:05:07.684291   31000 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 14:05:07.684321   31000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 14:05:07.684387   31000 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 14:05:07.684459   31000 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 14:05:07.684469   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 14:05:07.684549   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:05:07.694801   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:05:07.721497   31000 start.go:296] duration metric: took 138.544299ms for postStartSetup
	I1014 14:05:07.721536   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.721849   31000 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1014 14:05:07.721874   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.724451   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.724800   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.724822   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.725016   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.725182   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.725308   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.725473   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	W1014 14:05:07.813814   31000 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1014 14:05:07.813844   31000 fix.go:56] duration metric: took 1m32.045134032s for fixHost
	I1014 14:05:07.813863   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.816622   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.816995   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.817020   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.817183   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.817381   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.817508   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.817631   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.817770   31000 main.go:141] libmachine: Using SSH client type: native
	I1014 14:05:07.817940   31000 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.176 22 <nil> <nil>}
	I1014 14:05:07.817950   31000 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 14:05:07.931569   31000 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728914707.888734065
	
	I1014 14:05:07.931593   31000 fix.go:216] guest clock: 1728914707.888734065
	I1014 14:05:07.931602   31000 fix.go:229] Guest: 2024-10-14 14:05:07.888734065 +0000 UTC Remote: 2024-10-14 14:05:07.813851078 +0000 UTC m=+92.174581922 (delta=74.882987ms)
	I1014 14:05:07.931637   31000 fix.go:200] guest clock delta is within tolerance: 74.882987ms
	I1014 14:05:07.931646   31000 start.go:83] releasing machines lock for "ha-450021", held for 1m32.16294892s
	I1014 14:05:07.931678   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.931951   31000 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 14:05:07.934216   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.934577   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.934619   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.934732   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.935193   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.935367   31000 main.go:141] libmachine: (ha-450021) Calling .DriverName
	I1014 14:05:07.935446   31000 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:05:07.935500   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.935545   31000 ssh_runner.go:195] Run: cat /version.json
	I1014 14:05:07.935563   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHHostname
	I1014 14:05:07.938001   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.938360   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.938394   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.938466   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.938519   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.938676   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.938820   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.938954   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:05:07.938971   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:07.939018   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:07.939165   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHPort
	I1014 14:05:07.939282   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHKeyPath
	I1014 14:05:07.939392   31000 main.go:141] libmachine: (ha-450021) Calling .GetSSHUsername
	I1014 14:05:07.939524   31000 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/ha-450021/id_rsa Username:docker}
	I1014 14:05:08.020339   31000 ssh_runner.go:195] Run: systemctl --version
	I1014 14:05:08.047314   31000 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 14:05:08.206493   31000 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 14:05:08.215545   31000 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 14:05:08.215607   31000 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:05:08.225258   31000 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 14:05:08.225288   31000 start.go:495] detecting cgroup driver to use...
	I1014 14:05:08.225355   31000 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 14:05:08.242319   31000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 14:05:08.256359   31000 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:05:08.256447   31000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:05:08.269977   31000 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:05:08.284046   31000 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:05:08.432232   31000 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:05:08.578468   31000 docker.go:233] disabling docker service ...
	I1014 14:05:08.578547   31000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:05:08.594711   31000 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:05:08.608564   31000 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:05:08.753019   31000 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:05:08.911980   31000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:05:08.925610   31000 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:05:08.945379   31000 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 14:05:08.945447   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:08.969729   31000 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 14:05:08.969815   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:08.995933   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.007941   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.018981   31000 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:05:09.030303   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.041177   31000 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.052685   31000 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:05:09.063935   31000 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:05:09.073959   31000 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:05:09.084502   31000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:05:09.235685   31000 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 14:05:10.159828   31000 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 14:05:10.159900   31000 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 14:05:10.164853   31000 start.go:563] Will wait 60s for crictl version
	I1014 14:05:10.164914   31000 ssh_runner.go:195] Run: which crictl
	I1014 14:05:10.168780   31000 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:05:10.207420   31000 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 14:05:10.207491   31000 ssh_runner.go:195] Run: crio --version
	I1014 14:05:10.238348   31000 ssh_runner.go:195] Run: crio --version
	I1014 14:05:10.270007   31000 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 14:05:10.271344   31000 main.go:141] libmachine: (ha-450021) Calling .GetIP
	I1014 14:05:10.273972   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:10.274329   31000 main.go:141] libmachine: (ha-450021) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a1:20:5f", ip: ""} in network mk-ha-450021: {Iface:virbr1 ExpiryTime:2024-10-14 14:54:34 +0000 UTC Type:0 Mac:52:54:00:a1:20:5f Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:ha-450021 Clientid:01:52:54:00:a1:20:5f}
	I1014 14:05:10.274354   31000 main.go:141] libmachine: (ha-450021) DBG | domain ha-450021 has defined IP address 192.168.39.176 and MAC address 52:54:00:a1:20:5f in network mk-ha-450021
	I1014 14:05:10.274527   31000 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 14:05:10.279400   31000 kubeadm.go:883] updating cluster {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Cl
usterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-stora
geclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:05:10.279546   31000 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:05:10.279593   31000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:05:10.324390   31000 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:05:10.324415   31000 crio.go:433] Images already preloaded, skipping extraction
	I1014 14:05:10.324469   31000 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:05:10.357242   31000 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:05:10.357262   31000 cache_images.go:84] Images are preloaded, skipping loading
	I1014 14:05:10.357271   31000 kubeadm.go:934] updating node { 192.168.39.176 8443 v1.31.1 crio true true} ...
	I1014 14:05:10.357389   31000 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-450021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.176
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 14:05:10.357474   31000 ssh_runner.go:195] Run: crio config
	I1014 14:05:10.405793   31000 cni.go:84] Creating CNI manager for ""
	I1014 14:05:10.405820   31000 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1014 14:05:10.405829   31000 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:05:10.405854   31000 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.176 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-450021 NodeName:ha-450021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.176"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.176 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 14:05:10.405971   31000 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.176
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-450021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.176"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.176"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
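
The block above is the multi-document kubeadm configuration minikube renders (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file); a few lines further down the log it is copied to /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration of how such a multi-document file can be split and inspected, here is a minimal Go sketch assuming gopkg.in/yaml.v3 is available; the file path and field names are taken from the log above, everything else is hypothetical and not minikube's own code.

	// inspect_kubeadm.go: decode a multi-document kubeadm YAML file and print
	// each document's apiVersion/kind, plus kubernetesVersion when present.
	// Minimal sketch only.
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log above
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%v/%v", doc["apiVersion"], doc["kind"])
			if v, ok := doc["kubernetesVersion"]; ok {
				fmt.Printf(" (kubernetesVersion=%v)", v)
			}
			fmt.Println()
		}
	}

Run against the file above, this would print one apiVersion/kind pair per document, with kubernetesVersion: v1.31.1 reported for the ClusterConfiguration.
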
	
	I1014 14:05:10.405993   31000 kube-vip.go:115] generating kube-vip config ...
	I1014 14:05:10.406033   31000 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1014 14:05:10.417704   31000 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1014 14:05:10.417808   31000 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.3
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
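
The static pod manifest above is what kube-vip uses to hold the control-plane VIP: it advertises 192.168.39.254 on port 8443 (the APIServerHAVIP and API server port from the cluster config) and runs leader election over the plndr-cp-lock lease in kube-system. A quick reachability probe for that VIP, as a hedged Go sketch; the address and port come from the manifest above, the rest is illustrative.

	// vipcheck.go: probe the kube-vip control-plane VIP with a plain TCP dial.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Address and port taken from the kube-vip manifest in the log above.
		addr := "192.168.39.254:8443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("VIP reachable at", addr)
	}
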
	I1014 14:05:10.417864   31000 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 14:05:10.427628   31000 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:05:10.427698   31000 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1014 14:05:10.437373   31000 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1014 14:05:10.454606   31000 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:05:10.471910   31000 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1014 14:05:10.489667   31000 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1014 14:05:10.508129   31000 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1014 14:05:10.512722   31000 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:05:10.664143   31000 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:05:10.679747   31000 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021 for IP: 192.168.39.176
	I1014 14:05:10.679766   31000 certs.go:194] generating shared ca certs ...
	I1014 14:05:10.679784   31000 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:05:10.679950   31000 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 14:05:10.680004   31000 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 14:05:10.680019   31000 certs.go:256] generating profile certs ...
	I1014 14:05:10.680114   31000 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/client.key
	I1014 14:05:10.680148   31000 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4
	I1014 14:05:10.680165   31000 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.176 192.168.39.89 192.168.39.55 192.168.39.254]
	I1014 14:05:10.825563   31000 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4 ...
	I1014 14:05:10.825596   31000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4: {Name:mkcfbc98098c6aecb355a9c164bdef6c6768c1c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:05:10.825789   31000 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4 ...
	I1014 14:05:10.825805   31000 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4: {Name:mkc5cfa52ffbb125fc16bdba7b69d51ab972cad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:05:10.825900   31000 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt.22eec8a4 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt
	I1014 14:05:10.826065   31000 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key.22eec8a4 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key
	I1014 14:05:10.826220   31000 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key
	I1014 14:05:10.826236   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 14:05:10.826252   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 14:05:10.826267   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 14:05:10.826287   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 14:05:10.826307   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 14:05:10.826326   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 14:05:10.826346   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 14:05:10.826363   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 14:05:10.826426   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 14:05:10.826464   31000 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 14:05:10.826477   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 14:05:10.826513   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 14:05:10.826551   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:05:10.826583   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 14:05:10.826657   31000 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:05:10.826694   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 14:05:10.826782   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 14:05:10.826810   31000 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:10.827374   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:05:10.853933   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:05:10.879688   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:05:10.904921   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 14:05:10.931560   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 14:05:10.957109   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 14:05:10.983063   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:05:11.012148   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/ha-450021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 14:05:11.037543   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 14:05:11.063282   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 14:05:11.088424   31000 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:05:11.112711   31000 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:05:11.130372   31000 ssh_runner.go:195] Run: openssl version
	I1014 14:05:11.136367   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 14:05:11.147324   31000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 14:05:11.151964   31000 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:05:11.152028   31000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 14:05:11.158243   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 14:05:11.168807   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 14:05:11.180535   31000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 14:05:11.186075   31000 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:05:11.186128   31000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 14:05:11.191961   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:05:11.201595   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:05:11.212781   31000 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:11.217525   31000 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:11.217572   31000 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:05:11.223525   31000 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
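
	The three openssl/ln pairs above install the CA files in the usual OpenSSL hashed-directory layout: each certificate under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash (51391683.0, 3ec20f2e.0 and b5213941.0 in this run). A small Go sketch that verifies one of those links resolves where the log says it should; the filenames are taken from the log above, and computing the hash itself is left to openssl x509 -hash as in the commands shown.

	// hashlink.go: verify an OpenSSL-style hash symlink resolves to the expected CA file.
	package main

	import (
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Link name and expected target taken from the log above.
		link := "/etc/ssl/certs/b5213941.0"
		want := "/etc/ssl/certs/minikubeCA.pem"

		got, err := os.Readlink(link)
		if err != nil {
			log.Fatalf("readlink %s: %v", link, err)
		}
		if got != want {
			log.Fatalf("%s points at %s, expected %s", link, got, want)
		}
		fmt.Printf("%s -> %s (ok)\n", link, got)
	}
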
	I1014 14:05:11.232598   31000 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:05:11.237526   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 14:05:11.243196   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 14:05:11.248745   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 14:05:11.254197   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 14:05:11.259855   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 14:05:11.265357   31000 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
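
	Each openssl x509 -noout -in <cert> -checkend 86400 run above asks whether the certificate will still be valid 24 hours from now, so an imminent expiry can force regeneration before kubeadm is invoked. The equivalent check in Go with crypto/x509, as a minimal sketch; the certificate path is one of those probed above and nothing here is minikube's own implementation.

	// checkend.go: report whether a PEM certificate expires within the next 24h,
	// mirroring `openssl x509 -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// One of the certificates probed in the log above.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h:", cert.NotAfter)
			os.Exit(1)
		}
		fmt.Println("certificate valid past 24h, notAfter =", cert.NotAfter)
	}
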
	I1014 14:05:11.270760   31000 kubeadm.go:392] StartCluster: {Name:ha-450021 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 Clust
erName:ha-450021 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.89 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.55 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.127 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storagec
lass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:05:11.270860   31000 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 14:05:11.270898   31000 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:05:11.309473   31000 cri.go:89] found id: "79dbfdd20bd5f4f1f0ceeb403aaa2b00d772fefec6569cac068f173cc8ce8946"
	I1014 14:05:11.309493   31000 cri.go:89] found id: "42c08eb5405bfaa9f4dbda0537a4bba6ea644a0856dee53a5e306c489b2a0101"
	I1014 14:05:11.309498   31000 cri.go:89] found id: "2fed31eb864aba539742e2a57181cb8356c39f61ce6c8bea61c63e89c364fd51"
	I1014 14:05:11.309502   31000 cri.go:89] found id: "7b7559a10d3142a57769693b5c224e5a3f2685c276af6ab642a96b361f9409ca"
	I1014 14:05:11.309506   31000 cri.go:89] found id: "138a0b23a09075071550a4b7808439fd0baef4054fc6a7a7d4e8bc9a4457abfe"
	I1014 14:05:11.309511   31000 cri.go:89] found id: "b17b6d38f935951dfa1746d02ec45095af8e06f6258ed80913feba7a10224927"
	I1014 14:05:11.309515   31000 cri.go:89] found id: "b15af89d835eebb58d825b5cdfdcbcfc064fe27d95caa6667adfb0e714974996"
	I1014 14:05:11.309519   31000 cri.go:89] found id: "5eec863af38c114b5058f678da27f8ce8608a5cd97566d4e704e07ff87100124"
	I1014 14:05:11.309523   31000 cri.go:89] found id: "69f6cdf690df6514a349ce87c438a718209e9a098486e719653e5ac84d645899"
	I1014 14:05:11.309533   31000 cri.go:89] found id: "09fbfff3b334bde93db2f81855492434f8be70767826f2e33734ab52ad522a7a"
	I1014 14:05:11.309536   31000 cri.go:89] found id: "4efae268f9ec331abbf180a9264d60144b2a22485b89d39a46207f1c40454221"
	I1014 14:05:11.309541   31000 cri.go:89] found id: "6ebec97dfd405a7e2c8ad77d0255ca029054cfb1090eba8d4d3851bdb68213e1"
	I1014 14:05:11.309545   31000 cri.go:89] found id: "942c179e591a9c0a8a1d869cfc5456dcbfb37c78056f256b241c51aab8936a3e"
	I1014 14:05:11.309552   31000 cri.go:89] found id: ""
	I1014 14:05:11.309596   31000 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-450021 -n ha-450021
helpers_test.go:261: (dbg) Run:  kubectl --context ha-450021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.07s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (326.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-740856
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-740856
E1014 14:26:06.401586   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-740856: exit status 82 (2m1.917163606s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-740856-m03"  ...
	* Stopping node "multinode-740856-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-740856" : exit status 82
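
The non-zero exit here (82) accompanies the GUEST_STOP_TIMEOUT error in the stderr block: minikube stop gave up while two of the VMs were still reported as Running. A hedged Go sketch of how a caller can surface that exit code with os/exec; the binary and profile name are taken from the log, and the handling is illustrative rather than the test's own code.

	// stopwrap.go: run `minikube stop` and report its exit code,
	// e.g. 82 as seen in the failure above.
	package main

	import (
		"errors"
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-740856")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			log.Fatalf("minikube stop exited with status %d", exitErr.ExitCode())
		} else if err != nil {
			log.Fatalf("failed to run minikube stop: %v", err)
		}
		fmt.Println("stop completed cleanly")
	}
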
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-740856 --wait=true -v=8 --alsologtostderr
E1014 14:28:36.993876   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:31:06.401307   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-740856 --wait=true -v=8 --alsologtostderr: (3m21.472346798s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-740856
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-740856 -n multinode-740856
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-740856 logs -n 25: (2.089697495s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1440328619/001/cp-test_multinode-740856-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856:/home/docker/cp-test_multinode-740856-m02_multinode-740856.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856 sudo cat                                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m02_multinode-740856.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03:/home/docker/cp-test_multinode-740856-m02_multinode-740856-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856-m03 sudo cat                                   | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m02_multinode-740856-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp testdata/cp-test.txt                                                | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1440328619/001/cp-test_multinode-740856-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856:/home/docker/cp-test_multinode-740856-m03_multinode-740856.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856 sudo cat                                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m03_multinode-740856.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02:/home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856-m02 sudo cat                                   | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-740856 node stop m03                                                          | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	| node    | multinode-740856 node start                                                             | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-740856                                                                | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC |                     |
	| stop    | -p multinode-740856                                                                     | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC |                     |
	| start   | -p multinode-740856                                                                     | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:31 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-740856                                                                | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:31 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:27:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:27:49.143445   43353 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:27:49.143698   43353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:49.143707   43353 out.go:358] Setting ErrFile to fd 2...
	I1014 14:27:49.143712   43353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:49.143874   43353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:27:49.144386   43353 out.go:352] Setting JSON to false
	I1014 14:27:49.145217   43353 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4219,"bootTime":1728911850,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:27:49.145315   43353 start.go:139] virtualization: kvm guest
	I1014 14:27:49.147828   43353 out.go:177] * [multinode-740856] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:27:49.149302   43353 notify.go:220] Checking for updates...
	I1014 14:27:49.149336   43353 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:27:49.150946   43353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:27:49.152546   43353 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:27:49.153988   43353 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:27:49.155285   43353 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:27:49.156564   43353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:27:49.158222   43353 config.go:182] Loaded profile config "multinode-740856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:27:49.158301   43353 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:27:49.158747   43353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:27:49.158817   43353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:27:49.173925   43353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39965
	I1014 14:27:49.174428   43353 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:27:49.175038   43353 main.go:141] libmachine: Using API Version  1
	I1014 14:27:49.175067   43353 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:27:49.175376   43353 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:27:49.175586   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:27:49.210516   43353 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:27:49.211623   43353 start.go:297] selected driver: kvm2
	I1014 14:27:49.211635   43353 start.go:901] validating driver "kvm2" against &{Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fa
lse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:27:49.211753   43353 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:27:49.212070   43353 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:27:49.212132   43353 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:27:49.226728   43353 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:27:49.227362   43353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:27:49.227400   43353 cni.go:84] Creating CNI manager for ""
	I1014 14:27:49.227448   43353 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 14:27:49.227500   43353 start.go:340] cluster config:
	{Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-740856 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner
:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:27:49.227638   43353 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:27:49.229364   43353 out.go:177] * Starting "multinode-740856" primary control-plane node in "multinode-740856" cluster
	I1014 14:27:49.230603   43353 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:27:49.230640   43353 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 14:27:49.230650   43353 cache.go:56] Caching tarball of preloaded images
	I1014 14:27:49.230734   43353 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:27:49.230748   43353 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 14:27:49.230853   43353 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/config.json ...
	I1014 14:27:49.231026   43353 start.go:360] acquireMachinesLock for multinode-740856: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:27:49.231064   43353 start.go:364] duration metric: took 21.342µs to acquireMachinesLock for "multinode-740856"
	I1014 14:27:49.231081   43353 start.go:96] Skipping create...Using existing machine configuration
	I1014 14:27:49.231090   43353 fix.go:54] fixHost starting: 
	I1014 14:27:49.231335   43353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:27:49.231373   43353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:27:49.245974   43353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I1014 14:27:49.246401   43353 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:27:49.246908   43353 main.go:141] libmachine: Using API Version  1
	I1014 14:27:49.246929   43353 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:27:49.247240   43353 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:27:49.247413   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:27:49.247532   43353 main.go:141] libmachine: (multinode-740856) Calling .GetState
	I1014 14:27:49.248894   43353 fix.go:112] recreateIfNeeded on multinode-740856: state=Running err=<nil>
	W1014 14:27:49.248909   43353 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 14:27:49.250532   43353 out.go:177] * Updating the running kvm2 "multinode-740856" VM ...
	I1014 14:27:49.251683   43353 machine.go:93] provisionDockerMachine start ...
	I1014 14:27:49.251701   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:27:49.251869   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.254069   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.254462   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.254497   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.254573   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.254737   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.254848   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.254947   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.255063   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.255283   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.255295   43353 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 14:27:49.371843   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-740856
	
	I1014 14:27:49.371884   43353 main.go:141] libmachine: (multinode-740856) Calling .GetMachineName
	I1014 14:27:49.372143   43353 buildroot.go:166] provisioning hostname "multinode-740856"
	I1014 14:27:49.372170   43353 main.go:141] libmachine: (multinode-740856) Calling .GetMachineName
	I1014 14:27:49.372348   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.375030   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.375396   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.375427   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.375504   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.375677   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.375830   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.375976   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.376131   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.376350   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.376367   43353 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-740856 && echo "multinode-740856" | sudo tee /etc/hostname
	I1014 14:27:49.498749   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-740856
	
	I1014 14:27:49.498785   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.501700   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.502092   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.502118   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.502337   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.502511   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.502671   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.502817   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.502973   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.503133   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.503149   43353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-740856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-740856/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-740856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:27:49.612029   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:27:49.612055   43353 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 14:27:49.612082   43353 buildroot.go:174] setting up certificates
	I1014 14:27:49.612090   43353 provision.go:84] configureAuth start
	I1014 14:27:49.612099   43353 main.go:141] libmachine: (multinode-740856) Calling .GetMachineName
	I1014 14:27:49.612328   43353 main.go:141] libmachine: (multinode-740856) Calling .GetIP
	I1014 14:27:49.615108   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.615511   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.615536   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.615721   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.617783   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.618105   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.618131   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.618233   43353 provision.go:143] copyHostCerts
	I1014 14:27:49.618263   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:27:49.618295   43353 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 14:27:49.618304   43353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:27:49.618370   43353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 14:27:49.618458   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:27:49.618482   43353 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 14:27:49.618491   43353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:27:49.618529   43353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 14:27:49.618584   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:27:49.618623   43353 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 14:27:49.618631   43353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:27:49.618659   43353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 14:27:49.618725   43353 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.multinode-740856 san=[127.0.0.1 192.168.39.46 localhost minikube multinode-740856]
	I1014 14:27:49.731653   43353 provision.go:177] copyRemoteCerts
	I1014 14:27:49.731705   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:27:49.731726   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.734442   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.734833   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.734869   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.735021   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.735190   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.735320   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.735469   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:27:49.821856   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 14:27:49.821918   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 14:27:49.854231   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 14:27:49.854309   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1014 14:27:49.882173   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 14:27:49.882234   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 14:27:49.910844   43353 provision.go:87] duration metric: took 298.740803ms to configureAuth
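	(For reference: the server certificate generated and copied above embeds the SANs listed in the log: 127.0.0.1, 192.168.39.46, localhost, minikube, multinode-740856. A minimal way to confirm that on the guest, assuming openssl is present there, would be:

		sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
		# expected to list the IPs and hostnames shown in the san=[...] line above

	This is an illustrative check only, not a command taken from the run.)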
	I1014 14:27:49.910873   43353 buildroot.go:189] setting minikube options for container-runtime
	I1014 14:27:49.911142   43353 config.go:182] Loaded profile config "multinode-740856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:27:49.911221   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.913605   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.913989   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.914014   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.914182   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.914342   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.914485   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.914618   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.914759   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.914913   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.914926   43353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 14:29:20.716609   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 14:29:20.716638   43353 machine.go:96] duration metric: took 1m31.464940879s to provisionDockerMachine
	I1014 14:29:20.716652   43353 start.go:293] postStartSetup for "multinode-740856" (driver="kvm2")
	I1014 14:29:20.716667   43353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:29:20.716687   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.716989   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:29:20.717031   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.720378   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.720864   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.720901   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.721060   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.721236   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.721418   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.721570   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:29:20.807236   43353 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:29:20.812081   43353 command_runner.go:130] > NAME=Buildroot
	I1014 14:29:20.812102   43353 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 14:29:20.812106   43353 command_runner.go:130] > ID=buildroot
	I1014 14:29:20.812110   43353 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 14:29:20.812115   43353 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 14:29:20.812145   43353 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 14:29:20.812159   43353 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 14:29:20.812221   43353 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 14:29:20.812316   43353 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 14:29:20.812328   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 14:29:20.812434   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:29:20.821853   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:29:20.846581   43353 start.go:296] duration metric: took 129.893045ms for postStartSetup
	I1014 14:29:20.846638   43353 fix.go:56] duration metric: took 1m31.615546944s for fixHost
	I1014 14:29:20.846661   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.849129   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.849517   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.849545   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.849722   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.849911   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.850042   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.850301   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.850430   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:29:20.850591   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:29:20.850621   43353 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 14:29:20.955982   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728916160.924797470
	
	I1014 14:29:20.956003   43353 fix.go:216] guest clock: 1728916160.924797470
	I1014 14:29:20.956009   43353 fix.go:229] Guest: 2024-10-14 14:29:20.92479747 +0000 UTC Remote: 2024-10-14 14:29:20.846643527 +0000 UTC m=+91.739532368 (delta=78.153943ms)
	I1014 14:29:20.956028   43353 fix.go:200] guest clock delta is within tolerance: 78.153943ms
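	(The delta reported above is simply the guest clock minus the host clock at the moment of the check: 1728916160.924797470 - 1728916160.846643527 = 0.078153943 s = 78.153943 ms, which is why the preceding line accepts the guest clock as within tolerance.)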
	I1014 14:29:20.956034   43353 start.go:83] releasing machines lock for "multinode-740856", held for 1m31.724959548s
	I1014 14:29:20.956055   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.956354   43353 main.go:141] libmachine: (multinode-740856) Calling .GetIP
	I1014 14:29:20.958830   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.959128   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.959155   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.959254   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.959809   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.959970   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.960045   43353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:29:20.960087   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.960148   43353 ssh_runner.go:195] Run: cat /version.json
	I1014 14:29:20.960167   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.962562   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.962915   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.962933   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.963003   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.963099   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.963242   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.963383   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.963469   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.963500   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.963520   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:29:20.963693   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.963851   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.963989   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.964130   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:29:21.078743   43353 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1014 14:29:21.079368   43353 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1014 14:29:21.079499   43353 ssh_runner.go:195] Run: systemctl --version
	I1014 14:29:21.085655   43353 command_runner.go:130] > systemd 252 (252)
	I1014 14:29:21.085684   43353 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1014 14:29:21.085869   43353 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 14:29:21.244883   43353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 14:29:21.251035   43353 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 14:29:21.251178   43353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 14:29:21.251258   43353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:29:21.260485   43353 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 14:29:21.260500   43353 start.go:495] detecting cgroup driver to use...
	I1014 14:29:21.260552   43353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 14:29:21.277330   43353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 14:29:21.293794   43353 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:29:21.293887   43353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:29:21.310117   43353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:29:21.324402   43353 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:29:21.473064   43353 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:29:21.618736   43353 docker.go:233] disabling docker service ...
	I1014 14:29:21.618804   43353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:29:21.636363   43353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:29:21.650091   43353 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:29:21.795372   43353 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:29:21.938740   43353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:29:21.953125   43353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:29:21.972592   43353 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1014 14:29:21.973131   43353 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 14:29:21.973202   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:21.983944   43353 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 14:29:21.983999   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:21.994609   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.005159   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.015615   43353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:29:22.027062   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.045771   43353 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.057415   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
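	(Taken together, the sed edits above are meant to leave /etc/crio/crio.conf.d/02-crio.conf pointing at the registry.k8s.io/pause:3.10 pause image, using cgroupfs as the cgroup manager, running conmon in the "pod" cgroup, and opening unprivileged ports via default_sysctls. A sketch of how one might verify the end state over SSH, assuming grep is available on the guest:

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
		# expected, roughly:
		#   pause_image = "registry.k8s.io/pause:3.10"
		#   cgroup_manager = "cgroupfs"
		#   conmon_cgroup = "pod"
		#     "net.ipv4.ip_unprivileged_port_start=0",

	The exact file layout is an assumption; only the values come from the commands shown in the log.)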
	I1014 14:29:22.069190   43353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:29:22.080396   43353 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 14:29:22.080588   43353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:29:22.089980   43353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:29:22.223198   43353 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 14:29:24.045521   43353 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.822292246s)
	I1014 14:29:24.045548   43353 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 14:29:24.045609   43353 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 14:29:24.053107   43353 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1014 14:29:24.053136   43353 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 14:29:24.053158   43353 command_runner.go:130] > Device: 0,22	Inode: 1286        Links: 1
	I1014 14:29:24.053169   43353 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 14:29:24.053177   43353 command_runner.go:130] > Access: 2024-10-14 14:29:23.922934965 +0000
	I1014 14:29:24.053186   43353 command_runner.go:130] > Modify: 2024-10-14 14:29:23.902934433 +0000
	I1014 14:29:24.053195   43353 command_runner.go:130] > Change: 2024-10-14 14:29:23.902934433 +0000
	I1014 14:29:24.053200   43353 command_runner.go:130] >  Birth: -
	I1014 14:29:24.053222   43353 start.go:563] Will wait 60s for crictl version
	I1014 14:29:24.053273   43353 ssh_runner.go:195] Run: which crictl
	I1014 14:29:24.058410   43353 command_runner.go:130] > /usr/bin/crictl
	I1014 14:29:24.058530   43353 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:29:24.093091   43353 command_runner.go:130] > Version:  0.1.0
	I1014 14:29:24.093118   43353 command_runner.go:130] > RuntimeName:  cri-o
	I1014 14:29:24.093126   43353 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1014 14:29:24.093134   43353 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 14:29:24.093156   43353 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 14:29:24.093220   43353 ssh_runner.go:195] Run: crio --version
	I1014 14:29:24.119856   43353 command_runner.go:130] > crio version 1.29.1
	I1014 14:29:24.119881   43353 command_runner.go:130] > Version:        1.29.1
	I1014 14:29:24.119891   43353 command_runner.go:130] > GitCommit:      unknown
	I1014 14:29:24.119898   43353 command_runner.go:130] > GitCommitDate:  unknown
	I1014 14:29:24.119905   43353 command_runner.go:130] > GitTreeState:   clean
	I1014 14:29:24.119913   43353 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1014 14:29:24.119920   43353 command_runner.go:130] > GoVersion:      go1.21.6
	I1014 14:29:24.119927   43353 command_runner.go:130] > Compiler:       gc
	I1014 14:29:24.119934   43353 command_runner.go:130] > Platform:       linux/amd64
	I1014 14:29:24.119941   43353 command_runner.go:130] > Linkmode:       dynamic
	I1014 14:29:24.119951   43353 command_runner.go:130] > BuildTags:      
	I1014 14:29:24.119963   43353 command_runner.go:130] >   containers_image_ostree_stub
	I1014 14:29:24.119971   43353 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1014 14:29:24.119977   43353 command_runner.go:130] >   btrfs_noversion
	I1014 14:29:24.119988   43353 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1014 14:29:24.119995   43353 command_runner.go:130] >   libdm_no_deferred_remove
	I1014 14:29:24.120003   43353 command_runner.go:130] >   seccomp
	I1014 14:29:24.120010   43353 command_runner.go:130] > LDFlags:          unknown
	I1014 14:29:24.120020   43353 command_runner.go:130] > SeccompEnabled:   true
	I1014 14:29:24.120027   43353 command_runner.go:130] > AppArmorEnabled:  false
	I1014 14:29:24.121163   43353 ssh_runner.go:195] Run: crio --version
	I1014 14:29:24.148523   43353 command_runner.go:130] > crio version 1.29.1
	I1014 14:29:24.148541   43353 command_runner.go:130] > Version:        1.29.1
	I1014 14:29:24.148561   43353 command_runner.go:130] > GitCommit:      unknown
	I1014 14:29:24.148568   43353 command_runner.go:130] > GitCommitDate:  unknown
	I1014 14:29:24.148590   43353 command_runner.go:130] > GitTreeState:   clean
	I1014 14:29:24.148598   43353 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1014 14:29:24.148602   43353 command_runner.go:130] > GoVersion:      go1.21.6
	I1014 14:29:24.148606   43353 command_runner.go:130] > Compiler:       gc
	I1014 14:29:24.148613   43353 command_runner.go:130] > Platform:       linux/amd64
	I1014 14:29:24.148617   43353 command_runner.go:130] > Linkmode:       dynamic
	I1014 14:29:24.148622   43353 command_runner.go:130] > BuildTags:      
	I1014 14:29:24.148627   43353 command_runner.go:130] >   containers_image_ostree_stub
	I1014 14:29:24.148631   43353 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1014 14:29:24.148637   43353 command_runner.go:130] >   btrfs_noversion
	I1014 14:29:24.148641   43353 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1014 14:29:24.148645   43353 command_runner.go:130] >   libdm_no_deferred_remove
	I1014 14:29:24.148651   43353 command_runner.go:130] >   seccomp
	I1014 14:29:24.148657   43353 command_runner.go:130] > LDFlags:          unknown
	I1014 14:29:24.148667   43353 command_runner.go:130] > SeccompEnabled:   true
	I1014 14:29:24.148674   43353 command_runner.go:130] > AppArmorEnabled:  false
	I1014 14:29:24.151516   43353 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 14:29:24.152680   43353 main.go:141] libmachine: (multinode-740856) Calling .GetIP
	I1014 14:29:24.155170   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:24.155536   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:24.155563   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:24.155808   43353 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 14:29:24.160032   43353 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1014 14:29:24.160283   43353 kubeadm.go:883] updating cluster {Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:29:24.160420   43353 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:29:24.160460   43353 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:29:24.204853   43353 command_runner.go:130] > {
	I1014 14:29:24.204873   43353 command_runner.go:130] >   "images": [
	I1014 14:29:24.204877   43353 command_runner.go:130] >     {
	I1014 14:29:24.204885   43353 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1014 14:29:24.204890   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.204906   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1014 14:29:24.204911   43353 command_runner.go:130] >       ],
	I1014 14:29:24.204918   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.204936   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1014 14:29:24.204953   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1014 14:29:24.204959   43353 command_runner.go:130] >       ],
	I1014 14:29:24.204964   43353 command_runner.go:130] >       "size": "87190579",
	I1014 14:29:24.204968   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.204972   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.204979   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.204984   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.204987   43353 command_runner.go:130] >     },
	I1014 14:29:24.204991   43353 command_runner.go:130] >     {
	I1014 14:29:24.204997   43353 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1014 14:29:24.205001   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205009   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1014 14:29:24.205013   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205019   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205033   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1014 14:29:24.205047   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1014 14:29:24.205068   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205077   43353 command_runner.go:130] >       "size": "94965812",
	I1014 14:29:24.205082   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205090   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205097   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205100   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205103   43353 command_runner.go:130] >     },
	I1014 14:29:24.205107   43353 command_runner.go:130] >     {
	I1014 14:29:24.205112   43353 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1014 14:29:24.205119   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205123   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1014 14:29:24.205129   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205138   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205160   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1014 14:29:24.205174   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1014 14:29:24.205183   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205190   43353 command_runner.go:130] >       "size": "1363676",
	I1014 14:29:24.205194   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205201   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205205   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205209   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205214   43353 command_runner.go:130] >     },
	I1014 14:29:24.205217   43353 command_runner.go:130] >     {
	I1014 14:29:24.205223   43353 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 14:29:24.205232   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205243   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 14:29:24.205251   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205259   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205276   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 14:29:24.205298   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 14:29:24.205305   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205314   43353 command_runner.go:130] >       "size": "31470524",
	I1014 14:29:24.205323   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205333   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205340   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205349   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205357   43353 command_runner.go:130] >     },
	I1014 14:29:24.205365   43353 command_runner.go:130] >     {
	I1014 14:29:24.205378   43353 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1014 14:29:24.205387   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205398   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1014 14:29:24.205404   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205408   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205421   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1014 14:29:24.205435   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1014 14:29:24.205444   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205457   43353 command_runner.go:130] >       "size": "63273227",
	I1014 14:29:24.205466   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205476   43353 command_runner.go:130] >       "username": "nonroot",
	I1014 14:29:24.205484   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205493   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205500   43353 command_runner.go:130] >     },
	I1014 14:29:24.205503   43353 command_runner.go:130] >     {
	I1014 14:29:24.205514   43353 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1014 14:29:24.205524   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205534   43353 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1014 14:29:24.205543   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205550   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205563   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1014 14:29:24.205577   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1014 14:29:24.205583   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205588   43353 command_runner.go:130] >       "size": "149009664",
	I1014 14:29:24.205594   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.205601   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.205608   43353 command_runner.go:130] >       },
	I1014 14:29:24.205615   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205625   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205634   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205642   43353 command_runner.go:130] >     },
	I1014 14:29:24.205651   43353 command_runner.go:130] >     {
	I1014 14:29:24.205662   43353 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1014 14:29:24.205671   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205680   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1014 14:29:24.205686   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205691   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205705   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1014 14:29:24.205720   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1014 14:29:24.205728   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205738   43353 command_runner.go:130] >       "size": "95237600",
	I1014 14:29:24.205752   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.205762   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.205769   43353 command_runner.go:130] >       },
	I1014 14:29:24.205778   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205785   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205789   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205794   43353 command_runner.go:130] >     },
	I1014 14:29:24.205802   43353 command_runner.go:130] >     {
	I1014 14:29:24.205815   43353 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1014 14:29:24.205824   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205836   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1014 14:29:24.205844   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205853   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205880   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1014 14:29:24.205895   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1014 14:29:24.205901   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205908   43353 command_runner.go:130] >       "size": "89437508",
	I1014 14:29:24.205913   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.205920   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.205927   43353 command_runner.go:130] >       },
	I1014 14:29:24.205935   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205942   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205948   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205953   43353 command_runner.go:130] >     },
	I1014 14:29:24.205958   43353 command_runner.go:130] >     {
	I1014 14:29:24.205968   43353 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1014 14:29:24.205974   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205982   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1014 14:29:24.205985   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205991   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.206005   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1014 14:29:24.206019   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1014 14:29:24.206028   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206043   43353 command_runner.go:130] >       "size": "92733849",
	I1014 14:29:24.206052   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.206065   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.206071   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.206080   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.206084   43353 command_runner.go:130] >     },
	I1014 14:29:24.206090   43353 command_runner.go:130] >     {
	I1014 14:29:24.206098   43353 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1014 14:29:24.206108   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.206116   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1014 14:29:24.206124   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206131   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.206144   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1014 14:29:24.206158   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1014 14:29:24.206167   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206173   43353 command_runner.go:130] >       "size": "68420934",
	I1014 14:29:24.206179   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.206183   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.206188   43353 command_runner.go:130] >       },
	I1014 14:29:24.206196   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.206202   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.206211   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.206216   43353 command_runner.go:130] >     },
	I1014 14:29:24.206224   43353 command_runner.go:130] >     {
	I1014 14:29:24.206242   43353 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1014 14:29:24.206251   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.206259   43353 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1014 14:29:24.206267   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206276   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.206290   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1014 14:29:24.206304   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1014 14:29:24.206314   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206323   43353 command_runner.go:130] >       "size": "742080",
	I1014 14:29:24.206337   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.206347   43353 command_runner.go:130] >         "value": "65535"
	I1014 14:29:24.206354   43353 command_runner.go:130] >       },
	I1014 14:29:24.206358   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.206365   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.206371   43353 command_runner.go:130] >       "pinned": true
	I1014 14:29:24.206380   43353 command_runner.go:130] >     }
	I1014 14:29:24.206387   43353 command_runner.go:130] >   ]
	I1014 14:29:24.206396   43353 command_runner.go:130] > }
	I1014 14:29:24.206625   43353 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:29:24.206642   43353 crio.go:433] Images already preloaded, skipping extraction
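	(The listing above is the raw output of sudo crictl images --output json. To pull just the image tags out of that JSON, assuming jq happens to be installed on the guest, a minimal sketch would be:

		sudo crictl images --output json | jq -r '.images[].repoTags[]'
		# e.g. docker.io/kindest/kindnetd:v20241007-36f62932, registry.k8s.io/pause:3.10, ...

	jq availability is an assumption; the log itself only shows the unfiltered JSON.)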
	I1014 14:29:24.206695   43353 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:29:24.245145   43353 command_runner.go:130] > {
	I1014 14:29:24.245169   43353 command_runner.go:130] >   "images": [
	I1014 14:29:24.245174   43353 command_runner.go:130] >     {
	I1014 14:29:24.245186   43353 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1014 14:29:24.245192   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245201   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1014 14:29:24.245206   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245213   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245226   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1014 14:29:24.245240   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1014 14:29:24.245246   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245256   43353 command_runner.go:130] >       "size": "87190579",
	I1014 14:29:24.245262   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245268   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245275   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245281   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245289   43353 command_runner.go:130] >     },
	I1014 14:29:24.245294   43353 command_runner.go:130] >     {
	I1014 14:29:24.245306   43353 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1014 14:29:24.245312   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245322   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1014 14:29:24.245341   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245350   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245361   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1014 14:29:24.245375   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1014 14:29:24.245383   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245400   43353 command_runner.go:130] >       "size": "94965812",
	I1014 14:29:24.245409   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245421   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245427   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245433   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245441   43353 command_runner.go:130] >     },
	I1014 14:29:24.245446   43353 command_runner.go:130] >     {
	I1014 14:29:24.245458   43353 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1014 14:29:24.245466   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245476   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1014 14:29:24.245484   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245494   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245509   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1014 14:29:24.245523   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1014 14:29:24.245531   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245538   43353 command_runner.go:130] >       "size": "1363676",
	I1014 14:29:24.245544   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245548   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245554   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245558   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245564   43353 command_runner.go:130] >     },
	I1014 14:29:24.245567   43353 command_runner.go:130] >     {
	I1014 14:29:24.245573   43353 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 14:29:24.245579   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245584   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 14:29:24.245589   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245593   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245602   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 14:29:24.245624   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 14:29:24.245630   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245635   43353 command_runner.go:130] >       "size": "31470524",
	I1014 14:29:24.245639   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245645   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245649   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245655   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245659   43353 command_runner.go:130] >     },
	I1014 14:29:24.245664   43353 command_runner.go:130] >     {
	I1014 14:29:24.245670   43353 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1014 14:29:24.245676   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245681   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1014 14:29:24.245687   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245691   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245701   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1014 14:29:24.245714   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1014 14:29:24.245721   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245728   43353 command_runner.go:130] >       "size": "63273227",
	I1014 14:29:24.245734   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245743   43353 command_runner.go:130] >       "username": "nonroot",
	I1014 14:29:24.245749   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245758   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245766   43353 command_runner.go:130] >     },
	I1014 14:29:24.245775   43353 command_runner.go:130] >     {
	I1014 14:29:24.245785   43353 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1014 14:29:24.245794   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245801   43353 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1014 14:29:24.245809   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245815   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245822   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1014 14:29:24.245831   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1014 14:29:24.245837   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245840   43353 command_runner.go:130] >       "size": "149009664",
	I1014 14:29:24.245852   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.245859   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.245863   43353 command_runner.go:130] >       },
	I1014 14:29:24.245869   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245873   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245879   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245883   43353 command_runner.go:130] >     },
	I1014 14:29:24.245888   43353 command_runner.go:130] >     {
	I1014 14:29:24.245894   43353 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1014 14:29:24.245900   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245905   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1014 14:29:24.245910   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245915   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245924   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1014 14:29:24.245933   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1014 14:29:24.245938   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245942   43353 command_runner.go:130] >       "size": "95237600",
	I1014 14:29:24.245948   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.245952   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.245957   43353 command_runner.go:130] >       },
	I1014 14:29:24.245961   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245967   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245970   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245976   43353 command_runner.go:130] >     },
	I1014 14:29:24.245979   43353 command_runner.go:130] >     {
	I1014 14:29:24.245986   43353 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1014 14:29:24.245992   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245998   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1014 14:29:24.246003   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246007   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246029   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1014 14:29:24.246039   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1014 14:29:24.246044   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246053   43353 command_runner.go:130] >       "size": "89437508",
	I1014 14:29:24.246059   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.246063   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.246068   43353 command_runner.go:130] >       },
	I1014 14:29:24.246072   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246078   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246082   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.246088   43353 command_runner.go:130] >     },
	I1014 14:29:24.246091   43353 command_runner.go:130] >     {
	I1014 14:29:24.246097   43353 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1014 14:29:24.246103   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.246107   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1014 14:29:24.246111   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246115   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246124   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1014 14:29:24.246131   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1014 14:29:24.246136   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246141   43353 command_runner.go:130] >       "size": "92733849",
	I1014 14:29:24.246147   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.246151   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246157   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246161   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.246166   43353 command_runner.go:130] >     },
	I1014 14:29:24.246169   43353 command_runner.go:130] >     {
	I1014 14:29:24.246177   43353 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1014 14:29:24.246183   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.246188   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1014 14:29:24.246193   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246197   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246206   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1014 14:29:24.246213   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1014 14:29:24.246219   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246223   43353 command_runner.go:130] >       "size": "68420934",
	I1014 14:29:24.246233   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.246239   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.246243   43353 command_runner.go:130] >       },
	I1014 14:29:24.246249   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246252   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246258   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.246262   43353 command_runner.go:130] >     },
	I1014 14:29:24.246267   43353 command_runner.go:130] >     {
	I1014 14:29:24.246273   43353 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1014 14:29:24.246279   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.246283   43353 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1014 14:29:24.246288   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246292   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246298   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1014 14:29:24.246307   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1014 14:29:24.246313   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246316   43353 command_runner.go:130] >       "size": "742080",
	I1014 14:29:24.246323   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.246327   43353 command_runner.go:130] >         "value": "65535"
	I1014 14:29:24.246332   43353 command_runner.go:130] >       },
	I1014 14:29:24.246336   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246341   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246345   43353 command_runner.go:130] >       "pinned": true
	I1014 14:29:24.246350   43353 command_runner.go:130] >     }
	I1014 14:29:24.246353   43353 command_runner.go:130] >   ]
	I1014 14:29:24.246359   43353 command_runner.go:130] > }
	I1014 14:29:24.246475   43353 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:29:24.246486   43353 cache_images.go:84] Images are preloaded, skipping loading
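	The JSON inventory above is the CRI image listing that minikube inspects before deciding the preload can be skipped. As a rough way to reproduce a listing in the same shape on the guest (assuming crictl is available on the node, as it normally is in the minikube ISO for the cri-o runtime), one can run:

		sudo crictl images --output json

	which prints the same id / repoTags / repoDigests / size / pinned fields per image that appear in the log above.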
	I1014 14:29:24.246492   43353 kubeadm.go:934] updating node { 192.168.39.46 8443 v1.31.1 crio true true} ...
	I1014 14:29:24.246587   43353 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-740856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
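	The kubelet flags rendered above are written into a systemd drop-in on the guest. A quick spot-check that the running kubelet was actually started with the logged --hostname-override and --node-ip values (a sketch, assuming the profile name from this run) is:

		minikube ssh -p multinode-740856 -- systemctl cat kubelet

	which prints the kubelet unit together with its drop-ins, including the ExecStart line shown here.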
	I1014 14:29:24.246670   43353 ssh_runner.go:195] Run: crio config
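	`crio config` dumps the effective CRI-O configuration as TOML; the lines that follow are that dump as captured by the log (the "!" lines are its stderr). As a sketch for spot-checking one of the values in the dump below, again assuming the same profile:

		minikube ssh -p multinode-740856 -- sudo crio config 2>/dev/null | grep cgroup_manager

	should print cgroup_manager = "cgroupfs", matching the line captured further down.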
	I1014 14:29:24.284495   43353 command_runner.go:130] ! time="2024-10-14 14:29:24.253352957Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1014 14:29:24.289866   43353 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1014 14:29:24.295168   43353 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1014 14:29:24.295188   43353 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1014 14:29:24.295197   43353 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1014 14:29:24.295201   43353 command_runner.go:130] > #
	I1014 14:29:24.295212   43353 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1014 14:29:24.295221   43353 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1014 14:29:24.295229   43353 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1014 14:29:24.295242   43353 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1014 14:29:24.295248   43353 command_runner.go:130] > # reload'.
	I1014 14:29:24.295261   43353 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1014 14:29:24.295271   43353 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1014 14:29:24.295284   43353 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1014 14:29:24.295293   43353 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1014 14:29:24.295301   43353 command_runner.go:130] > [crio]
	I1014 14:29:24.295314   43353 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1014 14:29:24.295325   43353 command_runner.go:130] > # containers images, in this directory.
	I1014 14:29:24.295335   43353 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1014 14:29:24.295352   43353 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1014 14:29:24.295359   43353 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1014 14:29:24.295367   43353 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1014 14:29:24.295373   43353 command_runner.go:130] > # imagestore = ""
	I1014 14:29:24.295379   43353 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1014 14:29:24.295392   43353 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1014 14:29:24.295399   43353 command_runner.go:130] > storage_driver = "overlay"
	I1014 14:29:24.295404   43353 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1014 14:29:24.295418   43353 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1014 14:29:24.295422   43353 command_runner.go:130] > storage_option = [
	I1014 14:29:24.295429   43353 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1014 14:29:24.295432   43353 command_runner.go:130] > ]
	I1014 14:29:24.295440   43353 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1014 14:29:24.295448   43353 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1014 14:29:24.295454   43353 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1014 14:29:24.295460   43353 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1014 14:29:24.295468   43353 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1014 14:29:24.295475   43353 command_runner.go:130] > # always happen on a node reboot
	I1014 14:29:24.295479   43353 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1014 14:29:24.295492   43353 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1014 14:29:24.295500   43353 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1014 14:29:24.295505   43353 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1014 14:29:24.295512   43353 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1014 14:29:24.295519   43353 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1014 14:29:24.295529   43353 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1014 14:29:24.295541   43353 command_runner.go:130] > # internal_wipe = true
	I1014 14:29:24.295551   43353 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1014 14:29:24.295558   43353 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1014 14:29:24.295562   43353 command_runner.go:130] > # internal_repair = false
	I1014 14:29:24.295570   43353 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1014 14:29:24.295575   43353 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1014 14:29:24.295583   43353 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1014 14:29:24.295588   43353 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1014 14:29:24.295595   43353 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1014 14:29:24.295599   43353 command_runner.go:130] > [crio.api]
	I1014 14:29:24.295605   43353 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1014 14:29:24.295611   43353 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1014 14:29:24.295617   43353 command_runner.go:130] > # IP address on which the stream server will listen.
	I1014 14:29:24.295623   43353 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1014 14:29:24.295629   43353 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1014 14:29:24.295636   43353 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1014 14:29:24.295644   43353 command_runner.go:130] > # stream_port = "0"
	I1014 14:29:24.295651   43353 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1014 14:29:24.295655   43353 command_runner.go:130] > # stream_enable_tls = false
	I1014 14:29:24.295663   43353 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1014 14:29:24.295667   43353 command_runner.go:130] > # stream_idle_timeout = ""
	I1014 14:29:24.295675   43353 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1014 14:29:24.295683   43353 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1014 14:29:24.295689   43353 command_runner.go:130] > # minutes.
	I1014 14:29:24.295693   43353 command_runner.go:130] > # stream_tls_cert = ""
	I1014 14:29:24.295701   43353 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1014 14:29:24.295709   43353 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1014 14:29:24.295718   43353 command_runner.go:130] > # stream_tls_key = ""
	I1014 14:29:24.295727   43353 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1014 14:29:24.295739   43353 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1014 14:29:24.295766   43353 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1014 14:29:24.295776   43353 command_runner.go:130] > # stream_tls_ca = ""
	I1014 14:29:24.295786   43353 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 14:29:24.295793   43353 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1014 14:29:24.295807   43353 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 14:29:24.295816   43353 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1014 14:29:24.295826   43353 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1014 14:29:24.295837   43353 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1014 14:29:24.295841   43353 command_runner.go:130] > [crio.runtime]
	I1014 14:29:24.295848   43353 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1014 14:29:24.295854   43353 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1014 14:29:24.295860   43353 command_runner.go:130] > # "nofile=1024:2048"
	I1014 14:29:24.295866   43353 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1014 14:29:24.295872   43353 command_runner.go:130] > # default_ulimits = [
	I1014 14:29:24.295875   43353 command_runner.go:130] > # ]
	I1014 14:29:24.295881   43353 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1014 14:29:24.295887   43353 command_runner.go:130] > # no_pivot = false
	I1014 14:29:24.295893   43353 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1014 14:29:24.295899   43353 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1014 14:29:24.295910   43353 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1014 14:29:24.295918   43353 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1014 14:29:24.295922   43353 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1014 14:29:24.295929   43353 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 14:29:24.295935   43353 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1014 14:29:24.295939   43353 command_runner.go:130] > # Cgroup setting for conmon
	I1014 14:29:24.295951   43353 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1014 14:29:24.295957   43353 command_runner.go:130] > conmon_cgroup = "pod"
	I1014 14:29:24.295963   43353 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1014 14:29:24.295970   43353 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1014 14:29:24.295976   43353 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 14:29:24.295982   43353 command_runner.go:130] > conmon_env = [
	I1014 14:29:24.295988   43353 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1014 14:29:24.295994   43353 command_runner.go:130] > ]
	I1014 14:29:24.295999   43353 command_runner.go:130] > # Additional environment variables to set for all the
	I1014 14:29:24.296006   43353 command_runner.go:130] > # containers. These are overridden if set in the
	I1014 14:29:24.296011   43353 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1014 14:29:24.296017   43353 command_runner.go:130] > # default_env = [
	I1014 14:29:24.296021   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296027   43353 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1014 14:29:24.296036   43353 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1014 14:29:24.296042   43353 command_runner.go:130] > # selinux = false
	I1014 14:29:24.296048   43353 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1014 14:29:24.296055   43353 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1014 14:29:24.296063   43353 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1014 14:29:24.296067   43353 command_runner.go:130] > # seccomp_profile = ""
	I1014 14:29:24.296074   43353 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1014 14:29:24.296080   43353 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1014 14:29:24.296087   43353 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1014 14:29:24.296092   43353 command_runner.go:130] > # which might increase security.
	I1014 14:29:24.296100   43353 command_runner.go:130] > # This option is currently deprecated,
	I1014 14:29:24.296115   43353 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1014 14:29:24.296121   43353 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1014 14:29:24.296133   43353 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1014 14:29:24.296143   43353 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1014 14:29:24.296152   43353 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1014 14:29:24.296158   43353 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1014 14:29:24.296165   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.296170   43353 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1014 14:29:24.296175   43353 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1014 14:29:24.296181   43353 command_runner.go:130] > # the cgroup blockio controller.
	I1014 14:29:24.296190   43353 command_runner.go:130] > # blockio_config_file = ""
	I1014 14:29:24.296199   43353 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1014 14:29:24.296204   43353 command_runner.go:130] > # blockio parameters.
	I1014 14:29:24.296208   43353 command_runner.go:130] > # blockio_reload = false
	I1014 14:29:24.296216   43353 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1014 14:29:24.296221   43353 command_runner.go:130] > # irqbalance daemon.
	I1014 14:29:24.296226   43353 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1014 14:29:24.296234   43353 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1014 14:29:24.296242   43353 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1014 14:29:24.296249   43353 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1014 14:29:24.296254   43353 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1014 14:29:24.296262   43353 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1014 14:29:24.296269   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.296273   43353 command_runner.go:130] > # rdt_config_file = ""
	I1014 14:29:24.296280   43353 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1014 14:29:24.296284   43353 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1014 14:29:24.296314   43353 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1014 14:29:24.296321   43353 command_runner.go:130] > # separate_pull_cgroup = ""
	I1014 14:29:24.296327   43353 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1014 14:29:24.296332   43353 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1014 14:29:24.296336   43353 command_runner.go:130] > # will be added.
	I1014 14:29:24.296340   43353 command_runner.go:130] > # default_capabilities = [
	I1014 14:29:24.296345   43353 command_runner.go:130] > # 	"CHOWN",
	I1014 14:29:24.296348   43353 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1014 14:29:24.296354   43353 command_runner.go:130] > # 	"FSETID",
	I1014 14:29:24.296362   43353 command_runner.go:130] > # 	"FOWNER",
	I1014 14:29:24.296370   43353 command_runner.go:130] > # 	"SETGID",
	I1014 14:29:24.296378   43353 command_runner.go:130] > # 	"SETUID",
	I1014 14:29:24.296392   43353 command_runner.go:130] > # 	"SETPCAP",
	I1014 14:29:24.296401   43353 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1014 14:29:24.296409   43353 command_runner.go:130] > # 	"KILL",
	I1014 14:29:24.296415   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296428   43353 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1014 14:29:24.296440   43353 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1014 14:29:24.296451   43353 command_runner.go:130] > # add_inheritable_capabilities = false
	I1014 14:29:24.296463   43353 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1014 14:29:24.296473   43353 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 14:29:24.296479   43353 command_runner.go:130] > default_sysctls = [
	I1014 14:29:24.296484   43353 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1014 14:29:24.296489   43353 command_runner.go:130] > ]
	I1014 14:29:24.296494   43353 command_runner.go:130] > # List of devices on the host that a
	I1014 14:29:24.296505   43353 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1014 14:29:24.296514   43353 command_runner.go:130] > # allowed_devices = [
	I1014 14:29:24.296521   43353 command_runner.go:130] > # 	"/dev/fuse",
	I1014 14:29:24.296528   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296536   43353 command_runner.go:130] > # List of additional devices. specified as
	I1014 14:29:24.296550   43353 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1014 14:29:24.296560   43353 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1014 14:29:24.296572   43353 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 14:29:24.296582   43353 command_runner.go:130] > # additional_devices = [
	I1014 14:29:24.296589   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296597   43353 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1014 14:29:24.296605   43353 command_runner.go:130] > # cdi_spec_dirs = [
	I1014 14:29:24.296612   43353 command_runner.go:130] > # 	"/etc/cdi",
	I1014 14:29:24.296617   43353 command_runner.go:130] > # 	"/var/run/cdi",
	I1014 14:29:24.296624   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296634   43353 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1014 14:29:24.296646   43353 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1014 14:29:24.296662   43353 command_runner.go:130] > # Defaults to false.
	I1014 14:29:24.296673   43353 command_runner.go:130] > # device_ownership_from_security_context = false
	I1014 14:29:24.296686   43353 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1014 14:29:24.296698   43353 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1014 14:29:24.296707   43353 command_runner.go:130] > # hooks_dir = [
	I1014 14:29:24.296716   43353 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1014 14:29:24.296722   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296734   43353 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1014 14:29:24.296746   43353 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1014 14:29:24.296757   43353 command_runner.go:130] > # its default mounts from the following two files:
	I1014 14:29:24.296762   43353 command_runner.go:130] > #
	I1014 14:29:24.296773   43353 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1014 14:29:24.296786   43353 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1014 14:29:24.296796   43353 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1014 14:29:24.296804   43353 command_runner.go:130] > #
	I1014 14:29:24.296812   43353 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1014 14:29:24.296824   43353 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1014 14:29:24.296836   43353 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1014 14:29:24.296847   43353 command_runner.go:130] > #      only add mounts it finds in this file.
	I1014 14:29:24.296852   43353 command_runner.go:130] > #
	I1014 14:29:24.296861   43353 command_runner.go:130] > # default_mounts_file = ""
	I1014 14:29:24.296869   43353 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1014 14:29:24.296881   43353 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1014 14:29:24.296890   43353 command_runner.go:130] > pids_limit = 1024
	I1014 14:29:24.296902   43353 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1014 14:29:24.296913   43353 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1014 14:29:24.296925   43353 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1014 14:29:24.296940   43353 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1014 14:29:24.296949   43353 command_runner.go:130] > # log_size_max = -1
	I1014 14:29:24.296962   43353 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1014 14:29:24.296971   43353 command_runner.go:130] > # log_to_journald = false
	I1014 14:29:24.296982   43353 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1014 14:29:24.296993   43353 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1014 14:29:24.297011   43353 command_runner.go:130] > # Path to directory for container attach sockets.
	I1014 14:29:24.297021   43353 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1014 14:29:24.297030   43353 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1014 14:29:24.297039   43353 command_runner.go:130] > # bind_mount_prefix = ""
	I1014 14:29:24.297047   43353 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1014 14:29:24.297055   43353 command_runner.go:130] > # read_only = false
	I1014 14:29:24.297064   43353 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1014 14:29:24.297076   43353 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1014 14:29:24.297085   43353 command_runner.go:130] > # live configuration reload.
	I1014 14:29:24.297091   43353 command_runner.go:130] > # log_level = "info"
	I1014 14:29:24.297102   43353 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1014 14:29:24.297112   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.297120   43353 command_runner.go:130] > # log_filter = ""
	I1014 14:29:24.297132   43353 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1014 14:29:24.297145   43353 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1014 14:29:24.297154   43353 command_runner.go:130] > # separated by comma.
	I1014 14:29:24.297168   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297177   43353 command_runner.go:130] > # uid_mappings = ""
	I1014 14:29:24.297188   43353 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1014 14:29:24.297198   43353 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1014 14:29:24.297207   43353 command_runner.go:130] > # separated by comma.
	I1014 14:29:24.297217   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297224   43353 command_runner.go:130] > # gid_mappings = ""
	I1014 14:29:24.297229   43353 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1014 14:29:24.297237   43353 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 14:29:24.297245   43353 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 14:29:24.297251   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297258   43353 command_runner.go:130] > # minimum_mappable_uid = -1
	I1014 14:29:24.297263   43353 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1014 14:29:24.297271   43353 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 14:29:24.297277   43353 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 14:29:24.297285   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297292   43353 command_runner.go:130] > # minimum_mappable_gid = -1
	I1014 14:29:24.297303   43353 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1014 14:29:24.297311   43353 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1014 14:29:24.297318   43353 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1014 14:29:24.297322   43353 command_runner.go:130] > # ctr_stop_timeout = 30
	I1014 14:29:24.297328   43353 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1014 14:29:24.297336   43353 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1014 14:29:24.297340   43353 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1014 14:29:24.297347   43353 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1014 14:29:24.297351   43353 command_runner.go:130] > drop_infra_ctr = false
	I1014 14:29:24.297359   43353 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1014 14:29:24.297364   43353 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1014 14:29:24.297373   43353 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1014 14:29:24.297379   43353 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1014 14:29:24.297385   43353 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1014 14:29:24.297397   43353 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1014 14:29:24.297402   43353 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1014 14:29:24.297409   43353 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1014 14:29:24.297413   43353 command_runner.go:130] > # shared_cpuset = ""
	I1014 14:29:24.297421   43353 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1014 14:29:24.297426   43353 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1014 14:29:24.297432   43353 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1014 14:29:24.297439   43353 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1014 14:29:24.297445   43353 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1014 14:29:24.297450   43353 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1014 14:29:24.297457   43353 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1014 14:29:24.297463   43353 command_runner.go:130] > # enable_criu_support = false
	I1014 14:29:24.297469   43353 command_runner.go:130] > # Enable/disable the generation of the container,
	I1014 14:29:24.297479   43353 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1014 14:29:24.297485   43353 command_runner.go:130] > # enable_pod_events = false
	I1014 14:29:24.297494   43353 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 14:29:24.297502   43353 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 14:29:24.297507   43353 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1014 14:29:24.297513   43353 command_runner.go:130] > # default_runtime = "runc"
	I1014 14:29:24.297522   43353 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1014 14:29:24.297531   43353 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1014 14:29:24.297540   43353 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1014 14:29:24.297547   43353 command_runner.go:130] > # creation as a file is not desired either.
	I1014 14:29:24.297555   43353 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1014 14:29:24.297568   43353 command_runner.go:130] > # the hostname is being managed dynamically.
	I1014 14:29:24.297575   43353 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1014 14:29:24.297578   43353 command_runner.go:130] > # ]
	I1014 14:29:24.297585   43353 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1014 14:29:24.297592   43353 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1014 14:29:24.297599   43353 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1014 14:29:24.297607   43353 command_runner.go:130] > # Each entry in the table should follow the format:
	I1014 14:29:24.297610   43353 command_runner.go:130] > #
	I1014 14:29:24.297615   43353 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1014 14:29:24.297621   43353 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1014 14:29:24.297664   43353 command_runner.go:130] > # runtime_type = "oci"
	I1014 14:29:24.297670   43353 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1014 14:29:24.297675   43353 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1014 14:29:24.297681   43353 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1014 14:29:24.297686   43353 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1014 14:29:24.297691   43353 command_runner.go:130] > # monitor_env = []
	I1014 14:29:24.297696   43353 command_runner.go:130] > # privileged_without_host_devices = false
	I1014 14:29:24.297700   43353 command_runner.go:130] > # allowed_annotations = []
	I1014 14:29:24.297707   43353 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1014 14:29:24.297712   43353 command_runner.go:130] > # Where:
	I1014 14:29:24.297723   43353 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1014 14:29:24.297732   43353 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1014 14:29:24.297744   43353 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1014 14:29:24.297753   43353 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1014 14:29:24.297761   43353 command_runner.go:130] > #   in $PATH.
	I1014 14:29:24.297771   43353 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1014 14:29:24.297781   43353 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1014 14:29:24.297790   43353 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1014 14:29:24.297804   43353 command_runner.go:130] > #   state.
	I1014 14:29:24.297814   43353 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1014 14:29:24.297822   43353 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1014 14:29:24.297828   43353 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1014 14:29:24.297835   43353 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1014 14:29:24.297841   43353 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1014 14:29:24.297849   43353 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1014 14:29:24.297854   43353 command_runner.go:130] > #   The currently recognized values are:
	I1014 14:29:24.297862   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1014 14:29:24.297869   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1014 14:29:24.297877   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1014 14:29:24.297885   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1014 14:29:24.297894   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1014 14:29:24.297902   43353 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1014 14:29:24.297909   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1014 14:29:24.297917   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1014 14:29:24.297926   43353 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1014 14:29:24.297932   43353 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1014 14:29:24.297938   43353 command_runner.go:130] > #   deprecated option "conmon".
	I1014 14:29:24.297944   43353 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1014 14:29:24.297951   43353 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1014 14:29:24.297957   43353 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1014 14:29:24.297964   43353 command_runner.go:130] > #   should be moved to the container's cgroup
	I1014 14:29:24.297970   43353 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the montior.
	I1014 14:29:24.297977   43353 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1014 14:29:24.297983   43353 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1014 14:29:24.297989   43353 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1014 14:29:24.297993   43353 command_runner.go:130] > #
	I1014 14:29:24.297999   43353 command_runner.go:130] > # Using the seccomp notifier feature:
	I1014 14:29:24.298003   43353 command_runner.go:130] > #
	I1014 14:29:24.298011   43353 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1014 14:29:24.298019   43353 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1014 14:29:24.298024   43353 command_runner.go:130] > #
	I1014 14:29:24.298034   43353 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1014 14:29:24.298042   43353 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1014 14:29:24.298048   43353 command_runner.go:130] > #
	I1014 14:29:24.298054   43353 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1014 14:29:24.298059   43353 command_runner.go:130] > # feature.
	I1014 14:29:24.298063   43353 command_runner.go:130] > #
	I1014 14:29:24.298070   43353 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1014 14:29:24.298076   43353 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1014 14:29:24.298084   43353 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1014 14:29:24.298092   43353 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1014 14:29:24.298100   43353 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1014 14:29:24.298103   43353 command_runner.go:130] > #
	I1014 14:29:24.298110   43353 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1014 14:29:24.298116   43353 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1014 14:29:24.298121   43353 command_runner.go:130] > #
	I1014 14:29:24.298127   43353 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1014 14:29:24.298134   43353 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1014 14:29:24.298141   43353 command_runner.go:130] > #
	I1014 14:29:24.298146   43353 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1014 14:29:24.298154   43353 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1014 14:29:24.298160   43353 command_runner.go:130] > # limitation.
	I1014 14:29:24.298165   43353 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1014 14:29:24.298171   43353 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1014 14:29:24.298175   43353 command_runner.go:130] > runtime_type = "oci"
	I1014 14:29:24.298182   43353 command_runner.go:130] > runtime_root = "/run/runc"
	I1014 14:29:24.298186   43353 command_runner.go:130] > runtime_config_path = ""
	I1014 14:29:24.298192   43353 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 14:29:24.298196   43353 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 14:29:24.298200   43353 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 14:29:24.298211   43353 command_runner.go:130] > monitor_env = [
	I1014 14:29:24.298219   43353 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1014 14:29:24.298223   43353 command_runner.go:130] > ]
	I1014 14:29:24.298227   43353 command_runner.go:130] > privileged_without_host_devices = false
	I1014 14:29:24.298323   43353 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1014 14:29:24.298643   43353 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1014 14:29:24.298665   43353 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1014 14:29:24.298679   43353 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1014 14:29:24.298699   43353 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1014 14:29:24.298709   43353 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1014 14:29:24.298732   43353 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1014 14:29:24.298751   43353 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1014 14:29:24.298761   43353 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1014 14:29:24.298773   43353 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1014 14:29:24.298784   43353 command_runner.go:130] > # Example:
	I1014 14:29:24.298791   43353 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1014 14:29:24.298799   43353 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1014 14:29:24.298807   43353 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1014 14:29:24.298821   43353 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1014 14:29:24.298826   43353 command_runner.go:130] > # cpuset = 0
	I1014 14:29:24.298833   43353 command_runner.go:130] > # cpushares = "0-1"
	I1014 14:29:24.298837   43353 command_runner.go:130] > # Where:
	I1014 14:29:24.298845   43353 command_runner.go:130] > # The workload name is workload-type.
	I1014 14:29:24.298862   43353 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1014 14:29:24.298870   43353 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1014 14:29:24.298879   43353 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1014 14:29:24.298898   43353 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1014 14:29:24.298911   43353 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
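The workloads table described above is driven entirely by pod annotations. As a minimal sketch (the workload name "workload-type" and container name "app" are taken from the example and are hypothetical), the Go program below only builds and prints the annotation pairs a pod would carry to opt in:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Hypothetical annotations opting a pod into the "workload-type" workload
	// from the example above. The activation annotation is key-only; the
	// per-container override follows the example form
	// $annotation_prefix/$container_name with a JSON value.
	annotations := map[string]string{
		"io.crio/workload":          "",
		"io.crio.workload-type/app": `{"cpushares": "512"}`,
	}
	out, _ := json.MarshalIndent(annotations, "", "  ")
	fmt.Println(string(out))
}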
	I1014 14:29:24.298925   43353 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1014 14:29:24.298937   43353 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1014 14:29:24.298948   43353 command_runner.go:130] > # Default value is set to true
	I1014 14:29:24.298959   43353 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1014 14:29:24.298973   43353 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1014 14:29:24.298983   43353 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1014 14:29:24.298990   43353 command_runner.go:130] > # Default value is set to 'false'
	I1014 14:29:24.298999   43353 command_runner.go:130] > # disable_hostport_mapping = false
	I1014 14:29:24.299014   43353 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1014 14:29:24.299036   43353 command_runner.go:130] > #
	I1014 14:29:24.299049   43353 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1014 14:29:24.299067   43353 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1014 14:29:24.299079   43353 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1014 14:29:24.299091   43353 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1014 14:29:24.299105   43353 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1014 14:29:24.299111   43353 command_runner.go:130] > [crio.image]
	I1014 14:29:24.299121   43353 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1014 14:29:24.299136   43353 command_runner.go:130] > # default_transport = "docker://"
	I1014 14:29:24.299148   43353 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1014 14:29:24.299161   43353 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1014 14:29:24.299176   43353 command_runner.go:130] > # global_auth_file = ""
	I1014 14:29:24.299186   43353 command_runner.go:130] > # The image used to instantiate infra containers.
	I1014 14:29:24.299195   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.299203   43353 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1014 14:29:24.299217   43353 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1014 14:29:24.299228   43353 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1014 14:29:24.299239   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.299250   43353 command_runner.go:130] > # pause_image_auth_file = ""
	I1014 14:29:24.299261   43353 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1014 14:29:24.299273   43353 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1014 14:29:24.299287   43353 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1014 14:29:24.299296   43353 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1014 14:29:24.299306   43353 command_runner.go:130] > # pause_command = "/pause"
	I1014 14:29:24.299320   43353 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1014 14:29:24.299331   43353 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1014 14:29:24.299343   43353 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1014 14:29:24.299366   43353 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1014 14:29:24.299376   43353 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1014 14:29:24.299396   43353 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1014 14:29:24.299408   43353 command_runner.go:130] > # pinned_images = [
	I1014 14:29:24.299426   43353 command_runner.go:130] > # ]
	I1014 14:29:24.299438   43353 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1014 14:29:24.299465   43353 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1014 14:29:24.299475   43353 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1014 14:29:24.299487   43353 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1014 14:29:24.299503   43353 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1014 14:29:24.299512   43353 command_runner.go:130] > # signature_policy = ""
	I1014 14:29:24.299521   43353 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1014 14:29:24.299539   43353 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1014 14:29:24.299555   43353 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1014 14:29:24.299565   43353 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1014 14:29:24.299580   43353 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1014 14:29:24.299592   43353 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1014 14:29:24.299604   43353 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1014 14:29:24.299621   43353 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1014 14:29:24.299636   43353 command_runner.go:130] > # changing them here.
	I1014 14:29:24.299659   43353 command_runner.go:130] > # insecure_registries = [
	I1014 14:29:24.299666   43353 command_runner.go:130] > # ]
	I1014 14:29:24.299679   43353 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1014 14:29:24.299695   43353 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1014 14:29:24.299702   43353 command_runner.go:130] > # image_volumes = "mkdir"
	I1014 14:29:24.299710   43353 command_runner.go:130] > # Temporary directory to use for storing big files
	I1014 14:29:24.299717   43353 command_runner.go:130] > # big_files_temporary_dir = ""
	I1014 14:29:24.299731   43353 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1014 14:29:24.299737   43353 command_runner.go:130] > # CNI plugins.
	I1014 14:29:24.299743   43353 command_runner.go:130] > [crio.network]
	I1014 14:29:24.299751   43353 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1014 14:29:24.299764   43353 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1014 14:29:24.299771   43353 command_runner.go:130] > # cni_default_network = ""
	I1014 14:29:24.299780   43353 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1014 14:29:24.299859   43353 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1014 14:29:24.299915   43353 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1014 14:29:24.300234   43353 command_runner.go:130] > # plugin_dirs = [
	I1014 14:29:24.300247   43353 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1014 14:29:24.300252   43353 command_runner.go:130] > # ]
	I1014 14:29:24.300261   43353 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1014 14:29:24.300267   43353 command_runner.go:130] > [crio.metrics]
	I1014 14:29:24.300276   43353 command_runner.go:130] > # Globally enable or disable metrics support.
	I1014 14:29:24.300286   43353 command_runner.go:130] > enable_metrics = true
	I1014 14:29:24.300296   43353 command_runner.go:130] > # Specify enabled metrics collectors.
	I1014 14:29:24.300306   43353 command_runner.go:130] > # Per default all metrics are enabled.
	I1014 14:29:24.300315   43353 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1014 14:29:24.300327   43353 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1014 14:29:24.300338   43353 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1014 14:29:24.300347   43353 command_runner.go:130] > # metrics_collectors = [
	I1014 14:29:24.300353   43353 command_runner.go:130] > # 	"operations",
	I1014 14:29:24.300363   43353 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1014 14:29:24.300370   43353 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1014 14:29:24.300380   43353 command_runner.go:130] > # 	"operations_errors",
	I1014 14:29:24.300387   43353 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1014 14:29:24.300396   43353 command_runner.go:130] > # 	"image_pulls_by_name",
	I1014 14:29:24.300403   43353 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1014 14:29:24.300426   43353 command_runner.go:130] > # 	"image_pulls_failures",
	I1014 14:29:24.300437   43353 command_runner.go:130] > # 	"image_pulls_successes",
	I1014 14:29:24.300443   43353 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1014 14:29:24.300449   43353 command_runner.go:130] > # 	"image_layer_reuse",
	I1014 14:29:24.300459   43353 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1014 14:29:24.300465   43353 command_runner.go:130] > # 	"containers_oom_total",
	I1014 14:29:24.300474   43353 command_runner.go:130] > # 	"containers_oom",
	I1014 14:29:24.300480   43353 command_runner.go:130] > # 	"processes_defunct",
	I1014 14:29:24.300489   43353 command_runner.go:130] > # 	"operations_total",
	I1014 14:29:24.300496   43353 command_runner.go:130] > # 	"operations_latency_seconds",
	I1014 14:29:24.300505   43353 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1014 14:29:24.300512   43353 command_runner.go:130] > # 	"operations_errors_total",
	I1014 14:29:24.300521   43353 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1014 14:29:24.300531   43353 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1014 14:29:24.300539   43353 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1014 14:29:24.300548   43353 command_runner.go:130] > # 	"image_pulls_success_total",
	I1014 14:29:24.300556   43353 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1014 14:29:24.300565   43353 command_runner.go:130] > # 	"containers_oom_count_total",
	I1014 14:29:24.300575   43353 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1014 14:29:24.300585   43353 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1014 14:29:24.300590   43353 command_runner.go:130] > # ]
	I1014 14:29:24.300600   43353 command_runner.go:130] > # The port on which the metrics server will listen.
	I1014 14:29:24.300606   43353 command_runner.go:130] > # metrics_port = 9090
	I1014 14:29:24.300617   43353 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1014 14:29:24.300627   43353 command_runner.go:130] > # metrics_socket = ""
	I1014 14:29:24.300634   43353 command_runner.go:130] > # The certificate for the secure metrics server.
	I1014 14:29:24.300646   43353 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1014 14:29:24.300656   43353 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1014 14:29:24.300665   43353 command_runner.go:130] > # certificate on any modification event.
	I1014 14:29:24.300676   43353 command_runner.go:130] > # metrics_cert = ""
	I1014 14:29:24.300685   43353 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1014 14:29:24.300695   43353 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1014 14:29:24.300705   43353 command_runner.go:130] > # metrics_key = ""
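Since enable_metrics is set to true above, CRI-O should expose these collectors as Prometheus metrics on the configured port (the commented default is 9090). A rough Go sketch of scraping them, assuming the conventional /metrics path and that the port is reachable locally:

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Assumes the default metrics_port = 9090 from the config dump above and
	// the usual Prometheus text endpoint at /metrics.
	resp, err := http.Get("http://127.0.0.1:9090/metrics")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		// Keep only the image pull counters named in the collector list.
		if strings.Contains(line, "image_pulls") && !strings.HasPrefix(line, "#") {
			fmt.Println(line)
		}
	}
}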
	I1014 14:29:24.300713   43353 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1014 14:29:24.300721   43353 command_runner.go:130] > [crio.tracing]
	I1014 14:29:24.300730   43353 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1014 14:29:24.300740   43353 command_runner.go:130] > # enable_tracing = false
	I1014 14:29:24.300748   43353 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1014 14:29:24.300755   43353 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1014 14:29:24.300767   43353 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1014 14:29:24.300774   43353 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1014 14:29:24.300781   43353 command_runner.go:130] > # CRI-O NRI configuration.
	I1014 14:29:24.300789   43353 command_runner.go:130] > [crio.nri]
	I1014 14:29:24.300798   43353 command_runner.go:130] > # Globally enable or disable NRI.
	I1014 14:29:24.300806   43353 command_runner.go:130] > # enable_nri = false
	I1014 14:29:24.300813   43353 command_runner.go:130] > # NRI socket to listen on.
	I1014 14:29:24.300823   43353 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1014 14:29:24.300829   43353 command_runner.go:130] > # NRI plugin directory to use.
	I1014 14:29:24.300836   43353 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1014 14:29:24.300847   43353 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1014 14:29:24.300858   43353 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1014 14:29:24.300869   43353 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1014 14:29:24.300879   43353 command_runner.go:130] > # nri_disable_connections = false
	I1014 14:29:24.300890   43353 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1014 14:29:24.300904   43353 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1014 14:29:24.300922   43353 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1014 14:29:24.300936   43353 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1014 14:29:24.300949   43353 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1014 14:29:24.300955   43353 command_runner.go:130] > [crio.stats]
	I1014 14:29:24.300967   43353 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1014 14:29:24.300978   43353 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1014 14:29:24.300987   43353 command_runner.go:130] > # stats_collection_period = 0
	I1014 14:29:24.301068   43353 cni.go:84] Creating CNI manager for ""
	I1014 14:29:24.301079   43353 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 14:29:24.301088   43353 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:29:24.301106   43353 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.46 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-740856 NodeName:multinode-740856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 14:29:24.301237   43353 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-740856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.46"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.46"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
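The generated kubeadm config above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch for inspecting such a file, assuming gopkg.in/yaml.v3 and a local copy of the file (the test copies it to /var/tmp/minikube/kubeadm.yaml.new below):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is illustrative; on the node the generated config lands at
	// /var/tmp/minikube/kubeadm.yaml.new (see the scp step below).
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			fmt.Println(err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}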
	
	I1014 14:29:24.301290   43353 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 14:29:24.312738   43353 command_runner.go:130] > kubeadm
	I1014 14:29:24.312753   43353 command_runner.go:130] > kubectl
	I1014 14:29:24.312757   43353 command_runner.go:130] > kubelet
	I1014 14:29:24.312908   43353 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:29:24.312959   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 14:29:24.322794   43353 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1014 14:29:24.341613   43353 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:29:24.360060   43353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1014 14:29:24.378483   43353 ssh_runner.go:195] Run: grep 192.168.39.46	control-plane.minikube.internal$ /etc/hosts
	I1014 14:29:24.382665   43353 command_runner.go:130] > 192.168.39.46	control-plane.minikube.internal
	I1014 14:29:24.382734   43353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:29:24.532787   43353 ssh_runner.go:195] Run: sudo systemctl start kubelet
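The grep above only confirms that control-plane.minikube.internal already resolves to the node IP in /etc/hosts. The same check in Go, as a sketch (IP and hostname taken from the log; error handling kept minimal):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Mirrors the `grep 192.168.39.46\tcontrol-plane.minikube.internal$ /etc/hosts`
	// step above: confirm the control-plane hostname maps to the node IP.
	f, err := os.Open("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "192.168.39.46" && fields[1] == "control-plane.minikube.internal" {
			fmt.Println(sc.Text())
			return
		}
	}
	fmt.Println("entry not found")
}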
	I1014 14:29:24.547537   43353 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856 for IP: 192.168.39.46
	I1014 14:29:24.547566   43353 certs.go:194] generating shared ca certs ...
	I1014 14:29:24.547583   43353 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:29:24.547851   43353 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 14:29:24.547917   43353 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 14:29:24.547929   43353 certs.go:256] generating profile certs ...
	I1014 14:29:24.548007   43353 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/client.key
	I1014 14:29:24.548070   43353 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.key.eae55c26
	I1014 14:29:24.548133   43353 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.key
	I1014 14:29:24.548145   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 14:29:24.548162   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 14:29:24.548173   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 14:29:24.548184   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 14:29:24.548195   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 14:29:24.548207   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 14:29:24.548217   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 14:29:24.548229   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 14:29:24.548279   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 14:29:24.548309   43353 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 14:29:24.548322   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 14:29:24.548346   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 14:29:24.548367   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:29:24.548387   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 14:29:24.548422   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:29:24.548448   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.548460   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.548472   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.549009   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:29:24.573312   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:29:24.596589   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:29:24.620150   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 14:29:24.643170   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 14:29:24.666964   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 14:29:24.690231   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:29:24.713810   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 14:29:24.737078   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:29:24.760628   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 14:29:24.784289   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 14:29:24.807254   43353 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:29:24.823857   43353 ssh_runner.go:195] Run: openssl version
	I1014 14:29:24.829673   43353 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 14:29:24.829742   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 14:29:24.840383   43353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.844886   43353 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.844911   43353 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.844942   43353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.850488   43353 command_runner.go:130] > 3ec20f2e
	I1014 14:29:24.850543   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:29:24.859651   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:29:24.870032   43353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.874453   43353 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.874475   43353 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.874507   43353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.880084   43353 command_runner.go:130] > b5213941
	I1014 14:29:24.880137   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:29:24.889823   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 14:29:24.900169   43353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.904582   43353 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.904659   43353 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.904706   43353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.911484   43353 command_runner.go:130] > 51391683
	I1014 14:29:24.911548   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
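The three repetitions above follow the same pattern: hash the CA certificate with openssl, then symlink /etc/ssl/certs/<hash>.0 to it so the system trust store picks it up. A minimal Go sketch of that pattern, shelling out to openssl the same way the test does (paths are the ones from the log; writing the symlink requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert computes the OpenSSL subject hash for the certificate and
// creates the <hash>.0 symlink the trust store expects, mirroring the
// openssl + ln -fs steps in the log above.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs equivalent: drop any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}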
	I1014 14:29:24.921782   43353 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:29:24.927170   43353 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:29:24.927194   43353 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 14:29:24.927201   43353 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I1014 14:29:24.927211   43353 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 14:29:24.927237   43353 command_runner.go:130] > Access: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927245   43353 command_runner.go:130] > Modify: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927256   43353 command_runner.go:130] > Change: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927264   43353 command_runner.go:130] >  Birth: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927319   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 14:29:24.933342   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.933544   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 14:29:24.939544   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.939742   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 14:29:24.945533   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.945739   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 14:29:24.951529   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.951582   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 14:29:24.957381   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.957434   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 14:29:24.962769   43353 command_runner.go:130] > Certificate will not expire
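The `-checkend 86400` calls above ask whether each certificate expires within the next 24 hours. The same check can be done without openssl using crypto/x509; a sketch, with the path taken from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path expires within the given
// window, the question `openssl x509 -checkend 86400` answers above.
func checkend(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if expiring {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}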
	I1014 14:29:24.963024   43353 kubeadm.go:392] StartCluster: {Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:29:24.963140   43353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 14:29:24.963190   43353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:29:25.001844   43353 command_runner.go:130] > 7e72e1d43f5366c2f70993fa3a08a337f4b8c801487f277583a88f020cc11c10
	I1014 14:29:25.001866   43353 command_runner.go:130] > 4019b738ca4f28861ffdb40677a8c4ce2550b9baebbe4a973d64492a1dcfbff7
	I1014 14:29:25.001875   43353 command_runner.go:130] > 87470ac4eaca438897add9e375dddcb650dbd05770f45d94edbf259e60468db6
	I1014 14:29:25.001909   43353 command_runner.go:130] > 4da188fe9b0f370d054f6beffd8587a8a9fdd4d0c2db1539e2e595a5d4b9c871
	I1014 14:29:25.001985   43353 command_runner.go:130] > 1bfc55554fe6c8a963f799027a265d188484388cf15d0b2b3bcc52c6c7cf7095
	I1014 14:29:25.002017   43353 command_runner.go:130] > 4bbdb50a55e79de8cf1540f5fbfb9948a264aa06008c04a0626f7e3d01673693
	I1014 14:29:25.002104   43353 command_runner.go:130] > f6820157a4a338a4a4df260165d393247a734ce3fbdd5e6b2eb87de2723c7f8a
	I1014 14:29:25.002166   43353 command_runner.go:130] > 377bf132ef7fe09e3d871ef952d1b4b9127a4d4f7f85dc193eaa78062b662ab0
	I1014 14:29:25.003622   43353 cri.go:89] found id: "7e72e1d43f5366c2f70993fa3a08a337f4b8c801487f277583a88f020cc11c10"
	I1014 14:29:25.003639   43353 cri.go:89] found id: "4019b738ca4f28861ffdb40677a8c4ce2550b9baebbe4a973d64492a1dcfbff7"
	I1014 14:29:25.003642   43353 cri.go:89] found id: "87470ac4eaca438897add9e375dddcb650dbd05770f45d94edbf259e60468db6"
	I1014 14:29:25.003646   43353 cri.go:89] found id: "4da188fe9b0f370d054f6beffd8587a8a9fdd4d0c2db1539e2e595a5d4b9c871"
	I1014 14:29:25.003648   43353 cri.go:89] found id: "1bfc55554fe6c8a963f799027a265d188484388cf15d0b2b3bcc52c6c7cf7095"
	I1014 14:29:25.003652   43353 cri.go:89] found id: "4bbdb50a55e79de8cf1540f5fbfb9948a264aa06008c04a0626f7e3d01673693"
	I1014 14:29:25.003654   43353 cri.go:89] found id: "f6820157a4a338a4a4df260165d393247a734ce3fbdd5e6b2eb87de2723c7f8a"
	I1014 14:29:25.003657   43353 cri.go:89] found id: "377bf132ef7fe09e3d871ef952d1b4b9127a4d4f7f85dc193eaa78062b662ab0"
	I1014 14:29:25.003659   43353 cri.go:89] found id: ""
	I1014 14:29:25.003699   43353 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-740856 -n multinode-740856
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-740856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (326.19s)
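The tail of the truncated log above lists the kube-system container IDs via crictl before the post-mortem output ends. For reference, a standalone sketch of the same listing (same command as in the log; requires root and a running CRI-O):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Re-runs the CRI listing from the log above; the IDs it prints are the
	// ones cri.go records as "found id".
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println(id)
	}
}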

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 stop
E1014 14:31:40.069557   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-740856 stop: exit status 82 (2m0.458582349s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-740856-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-740856 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-740856 status: (18.680211613s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr
E1014 14:33:36.994902   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr: (3.359966175s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr": 
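The failure here is minikube stop timing out (exit status 82, reported as GUEST_STOP_TIMEOUT) while the node VMs stay in the "Running" state, so the follow-up status checks still count running hosts and kubelets. A sketch for reproducing the failing step outside the test harness and surfacing the exit code (binary path and profile name taken from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Runs the same stop command as the test and inspects the exit code;
	// 82 is the status the log above associates with GUEST_STOP_TIMEOUT.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-740856", "stop")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Printf("stop exited with status %d\n", exitErr.ExitCode())
	}
}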
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-740856 -n multinode-740856
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-740856 logs -n 25: (2.136019419s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856:/home/docker/cp-test_multinode-740856-m02_multinode-740856.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856 sudo cat                                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m02_multinode-740856.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03:/home/docker/cp-test_multinode-740856-m02_multinode-740856-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856-m03 sudo cat                                   | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m02_multinode-740856-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp testdata/cp-test.txt                                                | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1440328619/001/cp-test_multinode-740856-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856:/home/docker/cp-test_multinode-740856-m03_multinode-740856.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856 sudo cat                                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m03_multinode-740856.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt                       | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02:/home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856-m02 sudo cat                                   | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-740856 node stop m03                                                          | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	| node    | multinode-740856 node start                                                             | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-740856                                                                | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC |                     |
	| stop    | -p multinode-740856                                                                     | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC |                     |
	| start   | -p multinode-740856                                                                     | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:31 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-740856                                                                | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:31 UTC |                     |
	| node    | multinode-740856 node delete                                                            | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:31 UTC | 14 Oct 24 14:31 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-740856 stop                                                                   | multinode-740856 | jenkins | v1.34.0 | 14 Oct 24 14:31 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:27:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:27:49.143445   43353 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:27:49.143698   43353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:49.143707   43353 out.go:358] Setting ErrFile to fd 2...
	I1014 14:27:49.143712   43353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:49.143874   43353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:27:49.144386   43353 out.go:352] Setting JSON to false
	I1014 14:27:49.145217   43353 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4219,"bootTime":1728911850,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:27:49.145315   43353 start.go:139] virtualization: kvm guest
	I1014 14:27:49.147828   43353 out.go:177] * [multinode-740856] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:27:49.149302   43353 notify.go:220] Checking for updates...
	I1014 14:27:49.149336   43353 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:27:49.150946   43353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:27:49.152546   43353 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:27:49.153988   43353 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:27:49.155285   43353 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:27:49.156564   43353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:27:49.158222   43353 config.go:182] Loaded profile config "multinode-740856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:27:49.158301   43353 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:27:49.158747   43353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:27:49.158817   43353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:27:49.173925   43353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39965
	I1014 14:27:49.174428   43353 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:27:49.175038   43353 main.go:141] libmachine: Using API Version  1
	I1014 14:27:49.175067   43353 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:27:49.175376   43353 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:27:49.175586   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:27:49.210516   43353 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:27:49.211623   43353 start.go:297] selected driver: kvm2
	I1014 14:27:49.211635   43353 start.go:901] validating driver "kvm2" against &{Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fa
lse ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:27:49.211753   43353 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:27:49.212070   43353 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:27:49.212132   43353 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:27:49.226728   43353 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:27:49.227362   43353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:27:49.227400   43353 cni.go:84] Creating CNI manager for ""
	I1014 14:27:49.227448   43353 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 14:27:49.227500   43353 start.go:340] cluster config:
	{Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-740856 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner
:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:27:49.227638   43353 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:27:49.229364   43353 out.go:177] * Starting "multinode-740856" primary control-plane node in "multinode-740856" cluster
	I1014 14:27:49.230603   43353 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:27:49.230640   43353 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 14:27:49.230650   43353 cache.go:56] Caching tarball of preloaded images
	I1014 14:27:49.230734   43353 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:27:49.230748   43353 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 14:27:49.230853   43353 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/config.json ...
	I1014 14:27:49.231026   43353 start.go:360] acquireMachinesLock for multinode-740856: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:27:49.231064   43353 start.go:364] duration metric: took 21.342µs to acquireMachinesLock for "multinode-740856"
	I1014 14:27:49.231081   43353 start.go:96] Skipping create...Using existing machine configuration
	I1014 14:27:49.231090   43353 fix.go:54] fixHost starting: 
	I1014 14:27:49.231335   43353 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:27:49.231373   43353 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:27:49.245974   43353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37643
	I1014 14:27:49.246401   43353 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:27:49.246908   43353 main.go:141] libmachine: Using API Version  1
	I1014 14:27:49.246929   43353 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:27:49.247240   43353 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:27:49.247413   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:27:49.247532   43353 main.go:141] libmachine: (multinode-740856) Calling .GetState
	I1014 14:27:49.248894   43353 fix.go:112] recreateIfNeeded on multinode-740856: state=Running err=<nil>
	W1014 14:27:49.248909   43353 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 14:27:49.250532   43353 out.go:177] * Updating the running kvm2 "multinode-740856" VM ...
	I1014 14:27:49.251683   43353 machine.go:93] provisionDockerMachine start ...
	I1014 14:27:49.251701   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:27:49.251869   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.254069   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.254462   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.254497   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.254573   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.254737   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.254848   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.254947   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.255063   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.255283   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.255295   43353 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 14:27:49.371843   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-740856
	
	I1014 14:27:49.371884   43353 main.go:141] libmachine: (multinode-740856) Calling .GetMachineName
	I1014 14:27:49.372143   43353 buildroot.go:166] provisioning hostname "multinode-740856"
	I1014 14:27:49.372170   43353 main.go:141] libmachine: (multinode-740856) Calling .GetMachineName
	I1014 14:27:49.372348   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.375030   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.375396   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.375427   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.375504   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.375677   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.375830   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.375976   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.376131   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.376350   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.376367   43353 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-740856 && echo "multinode-740856" | sudo tee /etc/hostname
	I1014 14:27:49.498749   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-740856
	
	I1014 14:27:49.498785   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.501700   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.502092   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.502118   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.502337   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.502511   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.502671   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.502817   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.502973   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.503133   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.503149   43353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-740856' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-740856/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-740856' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:27:49.612029   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
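
The fragment above is the shell snippet minikube sends over SSH so that /etc/hosts maps 127.0.1.1 to the machine name. Purely as an illustration (not minikube's own provisioning code), a small Go helper could assemble that same fragment for an arbitrary hostname before handing it to the SSH runner; formatting the name in one place keeps the grep pattern and the sed replacement in sync:

    package main

    import "fmt"

    // hostsFixupCmd mirrors the /etc/hosts fix-up shown in the log for a given
    // hostname. Wrapping it in a Go helper is illustrative only.
    func hostsFixupCmd(hostname string) string {
    	return fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)
    }

    func main() {
    	fmt.Println(hostsFixupCmd("multinode-740856"))
    }
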
	I1014 14:27:49.612055   43353 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 14:27:49.612082   43353 buildroot.go:174] setting up certificates
	I1014 14:27:49.612090   43353 provision.go:84] configureAuth start
	I1014 14:27:49.612099   43353 main.go:141] libmachine: (multinode-740856) Calling .GetMachineName
	I1014 14:27:49.612328   43353 main.go:141] libmachine: (multinode-740856) Calling .GetIP
	I1014 14:27:49.615108   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.615511   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.615536   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.615721   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.617783   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.618105   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.618131   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.618233   43353 provision.go:143] copyHostCerts
	I1014 14:27:49.618263   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:27:49.618295   43353 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 14:27:49.618304   43353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:27:49.618370   43353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 14:27:49.618458   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:27:49.618482   43353 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 14:27:49.618491   43353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:27:49.618529   43353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 14:27:49.618584   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:27:49.618623   43353 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 14:27:49.618631   43353 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:27:49.618659   43353 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 14:27:49.618725   43353 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.multinode-740856 san=[127.0.0.1 192.168.39.46 localhost minikube multinode-740856]
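
The provisioner then generates a server certificate whose SANs cover the loopback address, the VM's DHCP lease (192.168.39.46), and the names localhost, minikube and multinode-740856. Below is a minimal, self-signed Go sketch carrying the same SAN set; the real flow signs with the ca.pem/ca-key.pem pair under .minikube/certs rather than self-signing, and the 26280h lifetime here is simply taken from the CertExpiration field in the cluster config above.

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed stand-in with the SANs reported in the log.
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-740856"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.46")},
    		DNSNames:     []string{"localhost", "minikube", "multinode-740856"},
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
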
	I1014 14:27:49.731653   43353 provision.go:177] copyRemoteCerts
	I1014 14:27:49.731705   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:27:49.731726   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.734442   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.734833   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.734869   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.735021   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.735190   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.735320   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.735469   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:27:49.821856   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1014 14:27:49.821918   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 14:27:49.854231   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1014 14:27:49.854309   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1014 14:27:49.882173   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1014 14:27:49.882234   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 14:27:49.910844   43353 provision.go:87] duration metric: took 298.740803ms to configureAuth
	I1014 14:27:49.910873   43353 buildroot.go:189] setting minikube options for container-runtime
	I1014 14:27:49.911142   43353 config.go:182] Loaded profile config "multinode-740856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:27:49.911221   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:27:49.913605   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.913989   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:27:49.914014   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:27:49.914182   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:27:49.914342   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.914485   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:27:49.914618   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:27:49.914759   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:49.914913   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:27:49.914926   43353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 14:29:20.716609   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 14:29:20.716638   43353 machine.go:96] duration metric: took 1m31.464940879s to provisionDockerMachine
	I1014 14:29:20.716652   43353 start.go:293] postStartSetup for "multinode-740856" (driver="kvm2")
	I1014 14:29:20.716667   43353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:29:20.716687   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.716989   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:29:20.717031   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.720378   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.720864   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.720901   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.721060   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.721236   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.721418   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.721570   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:29:20.807236   43353 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:29:20.812081   43353 command_runner.go:130] > NAME=Buildroot
	I1014 14:29:20.812102   43353 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1014 14:29:20.812106   43353 command_runner.go:130] > ID=buildroot
	I1014 14:29:20.812110   43353 command_runner.go:130] > VERSION_ID=2023.02.9
	I1014 14:29:20.812115   43353 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1014 14:29:20.812145   43353 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 14:29:20.812159   43353 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 14:29:20.812221   43353 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 14:29:20.812316   43353 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 14:29:20.812328   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /etc/ssl/certs/150232.pem
	I1014 14:29:20.812434   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:29:20.821853   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:29:20.846581   43353 start.go:296] duration metric: took 129.893045ms for postStartSetup
	I1014 14:29:20.846638   43353 fix.go:56] duration metric: took 1m31.615546944s for fixHost
	I1014 14:29:20.846661   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.849129   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.849517   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.849545   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.849722   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.849911   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.850042   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.850301   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.850430   43353 main.go:141] libmachine: Using SSH client type: native
	I1014 14:29:20.850591   43353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.46 22 <nil> <nil>}
	I1014 14:29:20.850621   43353 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 14:29:20.955982   43353 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728916160.924797470
	
	I1014 14:29:20.956003   43353 fix.go:216] guest clock: 1728916160.924797470
	I1014 14:29:20.956009   43353 fix.go:229] Guest: 2024-10-14 14:29:20.92479747 +0000 UTC Remote: 2024-10-14 14:29:20.846643527 +0000 UTC m=+91.739532368 (delta=78.153943ms)
	I1014 14:29:20.956028   43353 fix.go:200] guest clock delta is within tolerance: 78.153943ms
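
fix.go compares the guest clock (read with `date +%s.%N` over SSH) against the host-side timestamp and skips any resync when the delta is small; here the skew is about 78ms. A minimal sketch of that comparison follows, with an assumed 2s tolerance (the log only shows that 78ms counts as "within tolerance"):

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports the absolute guest/host clock skew and whether it
    // falls inside the given tolerance.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }

    func main() {
    	guest := time.Unix(1728916160, 924797470) // parsed from "1728916160.924797470"
    	host := time.Date(2024, 10, 14, 14, 29, 20, 846643527, time.UTC)
    	delta, ok := withinTolerance(guest, host, 2*time.Second) // tolerance value is an assumption
    	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }
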
	I1014 14:29:20.956034   43353 start.go:83] releasing machines lock for "multinode-740856", held for 1m31.724959548s
	I1014 14:29:20.956055   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.956354   43353 main.go:141] libmachine: (multinode-740856) Calling .GetIP
	I1014 14:29:20.958830   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.959128   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.959155   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.959254   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.959809   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.959970   43353 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:29:20.960045   43353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:29:20.960087   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.960148   43353 ssh_runner.go:195] Run: cat /version.json
	I1014 14:29:20.960167   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:29:20.962562   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.962915   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.962933   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.963003   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.963099   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.963242   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.963383   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.963469   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:20.963500   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:20.963520   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:29:20.963693   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:29:20.963851   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:29:20.963989   43353 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:29:20.964130   43353 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:29:21.078743   43353 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1014 14:29:21.079368   43353 command_runner.go:130] > {"iso_version": "v1.34.0-1728382514-19774", "kicbase_version": "v0.0.45-1728063813-19756", "minikube_version": "v1.34.0", "commit": "cf9f11c2b0369efc07a929c4a1fdb2b4b3c62ee9"}
	I1014 14:29:21.079499   43353 ssh_runner.go:195] Run: systemctl --version
	I1014 14:29:21.085655   43353 command_runner.go:130] > systemd 252 (252)
	I1014 14:29:21.085684   43353 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1014 14:29:21.085869   43353 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 14:29:21.244883   43353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 14:29:21.251035   43353 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1014 14:29:21.251178   43353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 14:29:21.251258   43353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:29:21.260485   43353 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1014 14:29:21.260500   43353 start.go:495] detecting cgroup driver to use...
	I1014 14:29:21.260552   43353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 14:29:21.277330   43353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 14:29:21.293794   43353 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:29:21.293887   43353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:29:21.310117   43353 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:29:21.324402   43353 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:29:21.473064   43353 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:29:21.618736   43353 docker.go:233] disabling docker service ...
	I1014 14:29:21.618804   43353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:29:21.636363   43353 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:29:21.650091   43353 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:29:21.795372   43353 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:29:21.938740   43353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:29:21.953125   43353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:29:21.972592   43353 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1014 14:29:21.973131   43353 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 14:29:21.973202   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:21.983944   43353 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 14:29:21.983999   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:21.994609   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.005159   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.015615   43353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:29:22.027062   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.045771   43353 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.057415   43353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:29:22.069190   43353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:29:22.080396   43353 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1014 14:29:22.080588   43353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:29:22.089980   43353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:29:22.223198   43353 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 14:29:24.045521   43353 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.822292246s)
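
Taken together, the sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10, switch cgroup_manager to cgroupfs, force conmon_cgroup to "pod", and open net.ipv4.ip_unprivileged_port_start, after which CRI-O is restarted. A rough local reproduction of the core edits is sketched below, assuming passwordless sudo and the guest's file layout; it is not minikube's own crio.go logic.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
    	cmds := []string{
    		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' %s`, conf),
    		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
    		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    	for _, c := range cmds {
    		// Each step is the same shell command the log shows being run over SSH.
    		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
    			fmt.Printf("%s: %v\n%s", c, err, out)
    			return
    		}
    	}
    }
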
	I1014 14:29:24.045548   43353 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 14:29:24.045609   43353 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 14:29:24.053107   43353 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1014 14:29:24.053136   43353 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1014 14:29:24.053158   43353 command_runner.go:130] > Device: 0,22	Inode: 1286        Links: 1
	I1014 14:29:24.053169   43353 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 14:29:24.053177   43353 command_runner.go:130] > Access: 2024-10-14 14:29:23.922934965 +0000
	I1014 14:29:24.053186   43353 command_runner.go:130] > Modify: 2024-10-14 14:29:23.902934433 +0000
	I1014 14:29:24.053195   43353 command_runner.go:130] > Change: 2024-10-14 14:29:23.902934433 +0000
	I1014 14:29:24.053200   43353 command_runner.go:130] >  Birth: -
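
Before any crictl calls, the start path waits up to 60s for /var/run/crio/crio.sock to appear; the stat output above confirms the socket is already present after the restart. A minimal polling sketch of that wait, with an assumed 500ms poll interval (only the 60s budget appears in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is ready")
    }
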
	I1014 14:29:24.053222   43353 start.go:563] Will wait 60s for crictl version
	I1014 14:29:24.053273   43353 ssh_runner.go:195] Run: which crictl
	I1014 14:29:24.058410   43353 command_runner.go:130] > /usr/bin/crictl
	I1014 14:29:24.058530   43353 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:29:24.093091   43353 command_runner.go:130] > Version:  0.1.0
	I1014 14:29:24.093118   43353 command_runner.go:130] > RuntimeName:  cri-o
	I1014 14:29:24.093126   43353 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1014 14:29:24.093134   43353 command_runner.go:130] > RuntimeApiVersion:  v1
	I1014 14:29:24.093156   43353 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 14:29:24.093220   43353 ssh_runner.go:195] Run: crio --version
	I1014 14:29:24.119856   43353 command_runner.go:130] > crio version 1.29.1
	I1014 14:29:24.119881   43353 command_runner.go:130] > Version:        1.29.1
	I1014 14:29:24.119891   43353 command_runner.go:130] > GitCommit:      unknown
	I1014 14:29:24.119898   43353 command_runner.go:130] > GitCommitDate:  unknown
	I1014 14:29:24.119905   43353 command_runner.go:130] > GitTreeState:   clean
	I1014 14:29:24.119913   43353 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1014 14:29:24.119920   43353 command_runner.go:130] > GoVersion:      go1.21.6
	I1014 14:29:24.119927   43353 command_runner.go:130] > Compiler:       gc
	I1014 14:29:24.119934   43353 command_runner.go:130] > Platform:       linux/amd64
	I1014 14:29:24.119941   43353 command_runner.go:130] > Linkmode:       dynamic
	I1014 14:29:24.119951   43353 command_runner.go:130] > BuildTags:      
	I1014 14:29:24.119963   43353 command_runner.go:130] >   containers_image_ostree_stub
	I1014 14:29:24.119971   43353 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1014 14:29:24.119977   43353 command_runner.go:130] >   btrfs_noversion
	I1014 14:29:24.119988   43353 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1014 14:29:24.119995   43353 command_runner.go:130] >   libdm_no_deferred_remove
	I1014 14:29:24.120003   43353 command_runner.go:130] >   seccomp
	I1014 14:29:24.120010   43353 command_runner.go:130] > LDFlags:          unknown
	I1014 14:29:24.120020   43353 command_runner.go:130] > SeccompEnabled:   true
	I1014 14:29:24.120027   43353 command_runner.go:130] > AppArmorEnabled:  false
	I1014 14:29:24.121163   43353 ssh_runner.go:195] Run: crio --version
	I1014 14:29:24.148523   43353 command_runner.go:130] > crio version 1.29.1
	I1014 14:29:24.148541   43353 command_runner.go:130] > Version:        1.29.1
	I1014 14:29:24.148561   43353 command_runner.go:130] > GitCommit:      unknown
	I1014 14:29:24.148568   43353 command_runner.go:130] > GitCommitDate:  unknown
	I1014 14:29:24.148590   43353 command_runner.go:130] > GitTreeState:   clean
	I1014 14:29:24.148598   43353 command_runner.go:130] > BuildDate:      2024-10-08T15:57:16Z
	I1014 14:29:24.148602   43353 command_runner.go:130] > GoVersion:      go1.21.6
	I1014 14:29:24.148606   43353 command_runner.go:130] > Compiler:       gc
	I1014 14:29:24.148613   43353 command_runner.go:130] > Platform:       linux/amd64
	I1014 14:29:24.148617   43353 command_runner.go:130] > Linkmode:       dynamic
	I1014 14:29:24.148622   43353 command_runner.go:130] > BuildTags:      
	I1014 14:29:24.148627   43353 command_runner.go:130] >   containers_image_ostree_stub
	I1014 14:29:24.148631   43353 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1014 14:29:24.148637   43353 command_runner.go:130] >   btrfs_noversion
	I1014 14:29:24.148641   43353 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1014 14:29:24.148645   43353 command_runner.go:130] >   libdm_no_deferred_remove
	I1014 14:29:24.148651   43353 command_runner.go:130] >   seccomp
	I1014 14:29:24.148657   43353 command_runner.go:130] > LDFlags:          unknown
	I1014 14:29:24.148667   43353 command_runner.go:130] > SeccompEnabled:   true
	I1014 14:29:24.148674   43353 command_runner.go:130] > AppArmorEnabled:  false
	I1014 14:29:24.151516   43353 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 14:29:24.152680   43353 main.go:141] libmachine: (multinode-740856) Calling .GetIP
	I1014 14:29:24.155170   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:24.155536   43353 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:29:24.155563   43353 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:29:24.155808   43353 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 14:29:24.160032   43353 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1014 14:29:24.160283   43353 kubeadm.go:883] updating cluster {Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
31.1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:29:24.160420   43353 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:29:24.160460   43353 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:29:24.204853   43353 command_runner.go:130] > {
	I1014 14:29:24.204873   43353 command_runner.go:130] >   "images": [
	I1014 14:29:24.204877   43353 command_runner.go:130] >     {
	I1014 14:29:24.204885   43353 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1014 14:29:24.204890   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.204906   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1014 14:29:24.204911   43353 command_runner.go:130] >       ],
	I1014 14:29:24.204918   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.204936   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1014 14:29:24.204953   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1014 14:29:24.204959   43353 command_runner.go:130] >       ],
	I1014 14:29:24.204964   43353 command_runner.go:130] >       "size": "87190579",
	I1014 14:29:24.204968   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.204972   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.204979   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.204984   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.204987   43353 command_runner.go:130] >     },
	I1014 14:29:24.204991   43353 command_runner.go:130] >     {
	I1014 14:29:24.204997   43353 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1014 14:29:24.205001   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205009   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1014 14:29:24.205013   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205019   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205033   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1014 14:29:24.205047   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1014 14:29:24.205068   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205077   43353 command_runner.go:130] >       "size": "94965812",
	I1014 14:29:24.205082   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205090   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205097   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205100   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205103   43353 command_runner.go:130] >     },
	I1014 14:29:24.205107   43353 command_runner.go:130] >     {
	I1014 14:29:24.205112   43353 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1014 14:29:24.205119   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205123   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1014 14:29:24.205129   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205138   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205160   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1014 14:29:24.205174   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1014 14:29:24.205183   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205190   43353 command_runner.go:130] >       "size": "1363676",
	I1014 14:29:24.205194   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205201   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205205   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205209   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205214   43353 command_runner.go:130] >     },
	I1014 14:29:24.205217   43353 command_runner.go:130] >     {
	I1014 14:29:24.205223   43353 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 14:29:24.205232   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205243   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 14:29:24.205251   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205259   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205276   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 14:29:24.205298   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 14:29:24.205305   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205314   43353 command_runner.go:130] >       "size": "31470524",
	I1014 14:29:24.205323   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205333   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205340   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205349   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205357   43353 command_runner.go:130] >     },
	I1014 14:29:24.205365   43353 command_runner.go:130] >     {
	I1014 14:29:24.205378   43353 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1014 14:29:24.205387   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205398   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1014 14:29:24.205404   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205408   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205421   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1014 14:29:24.205435   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1014 14:29:24.205444   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205457   43353 command_runner.go:130] >       "size": "63273227",
	I1014 14:29:24.205466   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.205476   43353 command_runner.go:130] >       "username": "nonroot",
	I1014 14:29:24.205484   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205493   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205500   43353 command_runner.go:130] >     },
	I1014 14:29:24.205503   43353 command_runner.go:130] >     {
	I1014 14:29:24.205514   43353 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1014 14:29:24.205524   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205534   43353 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1014 14:29:24.205543   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205550   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205563   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1014 14:29:24.205577   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1014 14:29:24.205583   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205588   43353 command_runner.go:130] >       "size": "149009664",
	I1014 14:29:24.205594   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.205601   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.205608   43353 command_runner.go:130] >       },
	I1014 14:29:24.205615   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205625   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205634   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205642   43353 command_runner.go:130] >     },
	I1014 14:29:24.205651   43353 command_runner.go:130] >     {
	I1014 14:29:24.205662   43353 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1014 14:29:24.205671   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205680   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1014 14:29:24.205686   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205691   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205705   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1014 14:29:24.205720   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1014 14:29:24.205728   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205738   43353 command_runner.go:130] >       "size": "95237600",
	I1014 14:29:24.205752   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.205762   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.205769   43353 command_runner.go:130] >       },
	I1014 14:29:24.205778   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205785   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205789   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205794   43353 command_runner.go:130] >     },
	I1014 14:29:24.205802   43353 command_runner.go:130] >     {
	I1014 14:29:24.205815   43353 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1014 14:29:24.205824   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205836   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1014 14:29:24.205844   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205853   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.205880   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1014 14:29:24.205895   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1014 14:29:24.205901   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205908   43353 command_runner.go:130] >       "size": "89437508",
	I1014 14:29:24.205913   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.205920   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.205927   43353 command_runner.go:130] >       },
	I1014 14:29:24.205935   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.205942   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.205948   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.205953   43353 command_runner.go:130] >     },
	I1014 14:29:24.205958   43353 command_runner.go:130] >     {
	I1014 14:29:24.205968   43353 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1014 14:29:24.205974   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.205982   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1014 14:29:24.205985   43353 command_runner.go:130] >       ],
	I1014 14:29:24.205991   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.206005   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1014 14:29:24.206019   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1014 14:29:24.206028   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206043   43353 command_runner.go:130] >       "size": "92733849",
	I1014 14:29:24.206052   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.206065   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.206071   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.206080   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.206084   43353 command_runner.go:130] >     },
	I1014 14:29:24.206090   43353 command_runner.go:130] >     {
	I1014 14:29:24.206098   43353 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1014 14:29:24.206108   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.206116   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1014 14:29:24.206124   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206131   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.206144   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1014 14:29:24.206158   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1014 14:29:24.206167   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206173   43353 command_runner.go:130] >       "size": "68420934",
	I1014 14:29:24.206179   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.206183   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.206188   43353 command_runner.go:130] >       },
	I1014 14:29:24.206196   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.206202   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.206211   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.206216   43353 command_runner.go:130] >     },
	I1014 14:29:24.206224   43353 command_runner.go:130] >     {
	I1014 14:29:24.206242   43353 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1014 14:29:24.206251   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.206259   43353 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1014 14:29:24.206267   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206276   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.206290   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1014 14:29:24.206304   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1014 14:29:24.206314   43353 command_runner.go:130] >       ],
	I1014 14:29:24.206323   43353 command_runner.go:130] >       "size": "742080",
	I1014 14:29:24.206337   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.206347   43353 command_runner.go:130] >         "value": "65535"
	I1014 14:29:24.206354   43353 command_runner.go:130] >       },
	I1014 14:29:24.206358   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.206365   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.206371   43353 command_runner.go:130] >       "pinned": true
	I1014 14:29:24.206380   43353 command_runner.go:130] >     }
	I1014 14:29:24.206387   43353 command_runner.go:130] >   ]
	I1014 14:29:24.206396   43353 command_runner.go:130] > }
	I1014 14:29:24.206625   43353 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:29:24.206642   43353 crio.go:433] Images already preloaded, skipping extraction
	I1014 14:29:24.206695   43353 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:29:24.245145   43353 command_runner.go:130] > {
	I1014 14:29:24.245169   43353 command_runner.go:130] >   "images": [
	I1014 14:29:24.245174   43353 command_runner.go:130] >     {
	I1014 14:29:24.245186   43353 command_runner.go:130] >       "id": "12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f",
	I1014 14:29:24.245192   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245201   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240813-c6f155d6"
	I1014 14:29:24.245206   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245213   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245226   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b",
	I1014 14:29:24.245240   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"
	I1014 14:29:24.245246   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245256   43353 command_runner.go:130] >       "size": "87190579",
	I1014 14:29:24.245262   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245268   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245275   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245281   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245289   43353 command_runner.go:130] >     },
	I1014 14:29:24.245294   43353 command_runner.go:130] >     {
	I1014 14:29:24.245306   43353 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1014 14:29:24.245312   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245322   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1014 14:29:24.245341   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245350   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245361   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1014 14:29:24.245375   43353 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1014 14:29:24.245383   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245400   43353 command_runner.go:130] >       "size": "94965812",
	I1014 14:29:24.245409   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245421   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245427   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245433   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245441   43353 command_runner.go:130] >     },
	I1014 14:29:24.245446   43353 command_runner.go:130] >     {
	I1014 14:29:24.245458   43353 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1014 14:29:24.245466   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245476   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1014 14:29:24.245484   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245494   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245509   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1014 14:29:24.245523   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1014 14:29:24.245531   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245538   43353 command_runner.go:130] >       "size": "1363676",
	I1014 14:29:24.245544   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245548   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245554   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245558   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245564   43353 command_runner.go:130] >     },
	I1014 14:29:24.245567   43353 command_runner.go:130] >     {
	I1014 14:29:24.245573   43353 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1014 14:29:24.245579   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245584   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1014 14:29:24.245589   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245593   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245602   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1014 14:29:24.245624   43353 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1014 14:29:24.245630   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245635   43353 command_runner.go:130] >       "size": "31470524",
	I1014 14:29:24.245639   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245645   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245649   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245655   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245659   43353 command_runner.go:130] >     },
	I1014 14:29:24.245664   43353 command_runner.go:130] >     {
	I1014 14:29:24.245670   43353 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1014 14:29:24.245676   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245681   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1014 14:29:24.245687   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245691   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245701   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1014 14:29:24.245714   43353 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1014 14:29:24.245721   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245728   43353 command_runner.go:130] >       "size": "63273227",
	I1014 14:29:24.245734   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.245743   43353 command_runner.go:130] >       "username": "nonroot",
	I1014 14:29:24.245749   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245758   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245766   43353 command_runner.go:130] >     },
	I1014 14:29:24.245775   43353 command_runner.go:130] >     {
	I1014 14:29:24.245785   43353 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1014 14:29:24.245794   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245801   43353 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1014 14:29:24.245809   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245815   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245822   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1014 14:29:24.245831   43353 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1014 14:29:24.245837   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245840   43353 command_runner.go:130] >       "size": "149009664",
	I1014 14:29:24.245852   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.245859   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.245863   43353 command_runner.go:130] >       },
	I1014 14:29:24.245869   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245873   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245879   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245883   43353 command_runner.go:130] >     },
	I1014 14:29:24.245888   43353 command_runner.go:130] >     {
	I1014 14:29:24.245894   43353 command_runner.go:130] >       "id": "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee",
	I1014 14:29:24.245900   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245905   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.1"
	I1014 14:29:24.245910   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245915   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.245924   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771",
	I1014 14:29:24.245933   43353 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"
	I1014 14:29:24.245938   43353 command_runner.go:130] >       ],
	I1014 14:29:24.245942   43353 command_runner.go:130] >       "size": "95237600",
	I1014 14:29:24.245948   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.245952   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.245957   43353 command_runner.go:130] >       },
	I1014 14:29:24.245961   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.245967   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.245970   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.245976   43353 command_runner.go:130] >     },
	I1014 14:29:24.245979   43353 command_runner.go:130] >     {
	I1014 14:29:24.245986   43353 command_runner.go:130] >       "id": "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1",
	I1014 14:29:24.245992   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.245998   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.1"
	I1014 14:29:24.246003   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246007   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246029   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1",
	I1014 14:29:24.246039   43353 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"
	I1014 14:29:24.246044   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246053   43353 command_runner.go:130] >       "size": "89437508",
	I1014 14:29:24.246059   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.246063   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.246068   43353 command_runner.go:130] >       },
	I1014 14:29:24.246072   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246078   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246082   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.246088   43353 command_runner.go:130] >     },
	I1014 14:29:24.246091   43353 command_runner.go:130] >     {
	I1014 14:29:24.246097   43353 command_runner.go:130] >       "id": "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561",
	I1014 14:29:24.246103   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.246107   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.1"
	I1014 14:29:24.246111   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246115   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246124   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44",
	I1014 14:29:24.246131   43353 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"
	I1014 14:29:24.246136   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246141   43353 command_runner.go:130] >       "size": "92733849",
	I1014 14:29:24.246147   43353 command_runner.go:130] >       "uid": null,
	I1014 14:29:24.246151   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246157   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246161   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.246166   43353 command_runner.go:130] >     },
	I1014 14:29:24.246169   43353 command_runner.go:130] >     {
	I1014 14:29:24.246177   43353 command_runner.go:130] >       "id": "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b",
	I1014 14:29:24.246183   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.246188   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.1"
	I1014 14:29:24.246193   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246197   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246206   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0",
	I1014 14:29:24.246213   43353 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"
	I1014 14:29:24.246219   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246223   43353 command_runner.go:130] >       "size": "68420934",
	I1014 14:29:24.246233   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.246239   43353 command_runner.go:130] >         "value": "0"
	I1014 14:29:24.246243   43353 command_runner.go:130] >       },
	I1014 14:29:24.246249   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246252   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246258   43353 command_runner.go:130] >       "pinned": false
	I1014 14:29:24.246262   43353 command_runner.go:130] >     },
	I1014 14:29:24.246267   43353 command_runner.go:130] >     {
	I1014 14:29:24.246273   43353 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1014 14:29:24.246279   43353 command_runner.go:130] >       "repoTags": [
	I1014 14:29:24.246283   43353 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1014 14:29:24.246288   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246292   43353 command_runner.go:130] >       "repoDigests": [
	I1014 14:29:24.246298   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1014 14:29:24.246307   43353 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1014 14:29:24.246313   43353 command_runner.go:130] >       ],
	I1014 14:29:24.246316   43353 command_runner.go:130] >       "size": "742080",
	I1014 14:29:24.246323   43353 command_runner.go:130] >       "uid": {
	I1014 14:29:24.246327   43353 command_runner.go:130] >         "value": "65535"
	I1014 14:29:24.246332   43353 command_runner.go:130] >       },
	I1014 14:29:24.246336   43353 command_runner.go:130] >       "username": "",
	I1014 14:29:24.246341   43353 command_runner.go:130] >       "spec": null,
	I1014 14:29:24.246345   43353 command_runner.go:130] >       "pinned": true
	I1014 14:29:24.246350   43353 command_runner.go:130] >     }
	I1014 14:29:24.246353   43353 command_runner.go:130] >   ]
	I1014 14:29:24.246359   43353 command_runner.go:130] > }
	I1014 14:29:24.246475   43353 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 14:29:24.246486   43353 cache_images.go:84] Images are preloaded, skipping loading
	I1014 14:29:24.246492   43353 kubeadm.go:934] updating node { 192.168.39.46 8443 v1.31.1 crio true true} ...
	I1014 14:29:24.246587   43353 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-740856 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.46
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 14:29:24.246670   43353 ssh_runner.go:195] Run: crio config
	I1014 14:29:24.284495   43353 command_runner.go:130] ! time="2024-10-14 14:29:24.253352957Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1014 14:29:24.289866   43353 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1014 14:29:24.295168   43353 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1014 14:29:24.295188   43353 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1014 14:29:24.295197   43353 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1014 14:29:24.295201   43353 command_runner.go:130] > #
	I1014 14:29:24.295212   43353 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1014 14:29:24.295221   43353 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1014 14:29:24.295229   43353 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1014 14:29:24.295242   43353 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1014 14:29:24.295248   43353 command_runner.go:130] > # reload'.
	I1014 14:29:24.295261   43353 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1014 14:29:24.295271   43353 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1014 14:29:24.295284   43353 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1014 14:29:24.295293   43353 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1014 14:29:24.295301   43353 command_runner.go:130] > [crio]
	I1014 14:29:24.295314   43353 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1014 14:29:24.295325   43353 command_runner.go:130] > # containers images, in this directory.
	I1014 14:29:24.295335   43353 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1014 14:29:24.295352   43353 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1014 14:29:24.295359   43353 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1014 14:29:24.295367   43353 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1014 14:29:24.295373   43353 command_runner.go:130] > # imagestore = ""
	I1014 14:29:24.295379   43353 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1014 14:29:24.295392   43353 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1014 14:29:24.295399   43353 command_runner.go:130] > storage_driver = "overlay"
	I1014 14:29:24.295404   43353 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1014 14:29:24.295418   43353 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1014 14:29:24.295422   43353 command_runner.go:130] > storage_option = [
	I1014 14:29:24.295429   43353 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1014 14:29:24.295432   43353 command_runner.go:130] > ]
	I1014 14:29:24.295440   43353 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1014 14:29:24.295448   43353 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1014 14:29:24.295454   43353 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1014 14:29:24.295460   43353 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1014 14:29:24.295468   43353 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1014 14:29:24.295475   43353 command_runner.go:130] > # always happen on a node reboot
	I1014 14:29:24.295479   43353 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1014 14:29:24.295492   43353 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1014 14:29:24.295500   43353 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1014 14:29:24.295505   43353 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1014 14:29:24.295512   43353 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1014 14:29:24.295519   43353 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1014 14:29:24.295529   43353 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1014 14:29:24.295541   43353 command_runner.go:130] > # internal_wipe = true
	I1014 14:29:24.295551   43353 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1014 14:29:24.295558   43353 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1014 14:29:24.295562   43353 command_runner.go:130] > # internal_repair = false
	I1014 14:29:24.295570   43353 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1014 14:29:24.295575   43353 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1014 14:29:24.295583   43353 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1014 14:29:24.295588   43353 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1014 14:29:24.295595   43353 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1014 14:29:24.295599   43353 command_runner.go:130] > [crio.api]
	I1014 14:29:24.295605   43353 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1014 14:29:24.295611   43353 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1014 14:29:24.295617   43353 command_runner.go:130] > # IP address on which the stream server will listen.
	I1014 14:29:24.295623   43353 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1014 14:29:24.295629   43353 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1014 14:29:24.295636   43353 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1014 14:29:24.295644   43353 command_runner.go:130] > # stream_port = "0"
	I1014 14:29:24.295651   43353 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1014 14:29:24.295655   43353 command_runner.go:130] > # stream_enable_tls = false
	I1014 14:29:24.295663   43353 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1014 14:29:24.295667   43353 command_runner.go:130] > # stream_idle_timeout = ""
	I1014 14:29:24.295675   43353 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1014 14:29:24.295683   43353 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1014 14:29:24.295689   43353 command_runner.go:130] > # minutes.
	I1014 14:29:24.295693   43353 command_runner.go:130] > # stream_tls_cert = ""
	I1014 14:29:24.295701   43353 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1014 14:29:24.295709   43353 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1014 14:29:24.295718   43353 command_runner.go:130] > # stream_tls_key = ""
	I1014 14:29:24.295727   43353 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1014 14:29:24.295739   43353 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1014 14:29:24.295766   43353 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1014 14:29:24.295776   43353 command_runner.go:130] > # stream_tls_ca = ""
	I1014 14:29:24.295786   43353 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 14:29:24.295793   43353 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1014 14:29:24.295807   43353 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1014 14:29:24.295816   43353 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1014 14:29:24.295826   43353 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1014 14:29:24.295837   43353 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1014 14:29:24.295841   43353 command_runner.go:130] > [crio.runtime]
	I1014 14:29:24.295848   43353 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1014 14:29:24.295854   43353 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1014 14:29:24.295860   43353 command_runner.go:130] > # "nofile=1024:2048"
	I1014 14:29:24.295866   43353 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1014 14:29:24.295872   43353 command_runner.go:130] > # default_ulimits = [
	I1014 14:29:24.295875   43353 command_runner.go:130] > # ]
	I1014 14:29:24.295881   43353 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1014 14:29:24.295887   43353 command_runner.go:130] > # no_pivot = false
	I1014 14:29:24.295893   43353 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1014 14:29:24.295899   43353 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1014 14:29:24.295910   43353 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1014 14:29:24.295918   43353 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1014 14:29:24.295922   43353 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1014 14:29:24.295929   43353 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 14:29:24.295935   43353 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1014 14:29:24.295939   43353 command_runner.go:130] > # Cgroup setting for conmon
	I1014 14:29:24.295951   43353 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1014 14:29:24.295957   43353 command_runner.go:130] > conmon_cgroup = "pod"
	I1014 14:29:24.295963   43353 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1014 14:29:24.295970   43353 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1014 14:29:24.295976   43353 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1014 14:29:24.295982   43353 command_runner.go:130] > conmon_env = [
	I1014 14:29:24.295988   43353 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1014 14:29:24.295994   43353 command_runner.go:130] > ]
	I1014 14:29:24.295999   43353 command_runner.go:130] > # Additional environment variables to set for all the
	I1014 14:29:24.296006   43353 command_runner.go:130] > # containers. These are overridden if set in the
	I1014 14:29:24.296011   43353 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1014 14:29:24.296017   43353 command_runner.go:130] > # default_env = [
	I1014 14:29:24.296021   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296027   43353 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1014 14:29:24.296036   43353 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1014 14:29:24.296042   43353 command_runner.go:130] > # selinux = false
	I1014 14:29:24.296048   43353 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1014 14:29:24.296055   43353 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1014 14:29:24.296063   43353 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1014 14:29:24.296067   43353 command_runner.go:130] > # seccomp_profile = ""
	I1014 14:29:24.296074   43353 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1014 14:29:24.296080   43353 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1014 14:29:24.296087   43353 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1014 14:29:24.296092   43353 command_runner.go:130] > # which might increase security.
	I1014 14:29:24.296100   43353 command_runner.go:130] > # This option is currently deprecated,
	I1014 14:29:24.296115   43353 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1014 14:29:24.296121   43353 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1014 14:29:24.296133   43353 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1014 14:29:24.296143   43353 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1014 14:29:24.296152   43353 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1014 14:29:24.296158   43353 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1014 14:29:24.296165   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.296170   43353 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1014 14:29:24.296175   43353 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1014 14:29:24.296181   43353 command_runner.go:130] > # the cgroup blockio controller.
	I1014 14:29:24.296190   43353 command_runner.go:130] > # blockio_config_file = ""
	I1014 14:29:24.296199   43353 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1014 14:29:24.296204   43353 command_runner.go:130] > # blockio parameters.
	I1014 14:29:24.296208   43353 command_runner.go:130] > # blockio_reload = false
	I1014 14:29:24.296216   43353 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1014 14:29:24.296221   43353 command_runner.go:130] > # irqbalance daemon.
	I1014 14:29:24.296226   43353 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1014 14:29:24.296234   43353 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1014 14:29:24.296242   43353 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1014 14:29:24.296249   43353 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1014 14:29:24.296254   43353 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1014 14:29:24.296262   43353 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1014 14:29:24.296269   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.296273   43353 command_runner.go:130] > # rdt_config_file = ""
	I1014 14:29:24.296280   43353 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1014 14:29:24.296284   43353 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1014 14:29:24.296314   43353 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1014 14:29:24.296321   43353 command_runner.go:130] > # separate_pull_cgroup = ""
	I1014 14:29:24.296327   43353 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1014 14:29:24.296332   43353 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1014 14:29:24.296336   43353 command_runner.go:130] > # will be added.
	I1014 14:29:24.296340   43353 command_runner.go:130] > # default_capabilities = [
	I1014 14:29:24.296345   43353 command_runner.go:130] > # 	"CHOWN",
	I1014 14:29:24.296348   43353 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1014 14:29:24.296354   43353 command_runner.go:130] > # 	"FSETID",
	I1014 14:29:24.296362   43353 command_runner.go:130] > # 	"FOWNER",
	I1014 14:29:24.296370   43353 command_runner.go:130] > # 	"SETGID",
	I1014 14:29:24.296378   43353 command_runner.go:130] > # 	"SETUID",
	I1014 14:29:24.296392   43353 command_runner.go:130] > # 	"SETPCAP",
	I1014 14:29:24.296401   43353 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1014 14:29:24.296409   43353 command_runner.go:130] > # 	"KILL",
	I1014 14:29:24.296415   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296428   43353 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1014 14:29:24.296440   43353 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1014 14:29:24.296451   43353 command_runner.go:130] > # add_inheritable_capabilities = false
	I1014 14:29:24.296463   43353 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1014 14:29:24.296473   43353 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 14:29:24.296479   43353 command_runner.go:130] > default_sysctls = [
	I1014 14:29:24.296484   43353 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1014 14:29:24.296489   43353 command_runner.go:130] > ]
	I1014 14:29:24.296494   43353 command_runner.go:130] > # List of devices on the host that a
	I1014 14:29:24.296505   43353 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1014 14:29:24.296514   43353 command_runner.go:130] > # allowed_devices = [
	I1014 14:29:24.296521   43353 command_runner.go:130] > # 	"/dev/fuse",
	I1014 14:29:24.296528   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296536   43353 command_runner.go:130] > # List of additional devices. specified as
	I1014 14:29:24.296550   43353 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1014 14:29:24.296560   43353 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1014 14:29:24.296572   43353 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1014 14:29:24.296582   43353 command_runner.go:130] > # additional_devices = [
	I1014 14:29:24.296589   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296597   43353 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1014 14:29:24.296605   43353 command_runner.go:130] > # cdi_spec_dirs = [
	I1014 14:29:24.296612   43353 command_runner.go:130] > # 	"/etc/cdi",
	I1014 14:29:24.296617   43353 command_runner.go:130] > # 	"/var/run/cdi",
	I1014 14:29:24.296624   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296634   43353 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1014 14:29:24.296646   43353 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1014 14:29:24.296662   43353 command_runner.go:130] > # Defaults to false.
	I1014 14:29:24.296673   43353 command_runner.go:130] > # device_ownership_from_security_context = false
	I1014 14:29:24.296686   43353 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1014 14:29:24.296698   43353 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1014 14:29:24.296707   43353 command_runner.go:130] > # hooks_dir = [
	I1014 14:29:24.296716   43353 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1014 14:29:24.296722   43353 command_runner.go:130] > # ]
	I1014 14:29:24.296734   43353 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1014 14:29:24.296746   43353 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1014 14:29:24.296757   43353 command_runner.go:130] > # its default mounts from the following two files:
	I1014 14:29:24.296762   43353 command_runner.go:130] > #
	I1014 14:29:24.296773   43353 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1014 14:29:24.296786   43353 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1014 14:29:24.296796   43353 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1014 14:29:24.296804   43353 command_runner.go:130] > #
	I1014 14:29:24.296812   43353 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1014 14:29:24.296824   43353 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1014 14:29:24.296836   43353 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1014 14:29:24.296847   43353 command_runner.go:130] > #      only add mounts it finds in this file.
	I1014 14:29:24.296852   43353 command_runner.go:130] > #
	I1014 14:29:24.296861   43353 command_runner.go:130] > # default_mounts_file = ""
	I1014 14:29:24.296869   43353 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1014 14:29:24.296881   43353 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1014 14:29:24.296890   43353 command_runner.go:130] > pids_limit = 1024
	I1014 14:29:24.296902   43353 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1014 14:29:24.296913   43353 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1014 14:29:24.296925   43353 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1014 14:29:24.296940   43353 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1014 14:29:24.296949   43353 command_runner.go:130] > # log_size_max = -1
	I1014 14:29:24.296962   43353 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1014 14:29:24.296971   43353 command_runner.go:130] > # log_to_journald = false
	I1014 14:29:24.296982   43353 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1014 14:29:24.296993   43353 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1014 14:29:24.297011   43353 command_runner.go:130] > # Path to directory for container attach sockets.
	I1014 14:29:24.297021   43353 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1014 14:29:24.297030   43353 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1014 14:29:24.297039   43353 command_runner.go:130] > # bind_mount_prefix = ""
	I1014 14:29:24.297047   43353 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1014 14:29:24.297055   43353 command_runner.go:130] > # read_only = false
	I1014 14:29:24.297064   43353 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1014 14:29:24.297076   43353 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1014 14:29:24.297085   43353 command_runner.go:130] > # live configuration reload.
	I1014 14:29:24.297091   43353 command_runner.go:130] > # log_level = "info"
	I1014 14:29:24.297102   43353 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1014 14:29:24.297112   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.297120   43353 command_runner.go:130] > # log_filter = ""
	I1014 14:29:24.297132   43353 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1014 14:29:24.297145   43353 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1014 14:29:24.297154   43353 command_runner.go:130] > # separated by comma.
	I1014 14:29:24.297168   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297177   43353 command_runner.go:130] > # uid_mappings = ""
	I1014 14:29:24.297188   43353 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1014 14:29:24.297198   43353 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1014 14:29:24.297207   43353 command_runner.go:130] > # separated by comma.
	I1014 14:29:24.297217   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297224   43353 command_runner.go:130] > # gid_mappings = ""
	I1014 14:29:24.297229   43353 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1014 14:29:24.297237   43353 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 14:29:24.297245   43353 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 14:29:24.297251   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297258   43353 command_runner.go:130] > # minimum_mappable_uid = -1
	I1014 14:29:24.297263   43353 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1014 14:29:24.297271   43353 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1014 14:29:24.297277   43353 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1014 14:29:24.297285   43353 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1014 14:29:24.297292   43353 command_runner.go:130] > # minimum_mappable_gid = -1
	I1014 14:29:24.297303   43353 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1014 14:29:24.297311   43353 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1014 14:29:24.297318   43353 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1014 14:29:24.297322   43353 command_runner.go:130] > # ctr_stop_timeout = 30
	I1014 14:29:24.297328   43353 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1014 14:29:24.297336   43353 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1014 14:29:24.297340   43353 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1014 14:29:24.297347   43353 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1014 14:29:24.297351   43353 command_runner.go:130] > drop_infra_ctr = false
	I1014 14:29:24.297359   43353 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1014 14:29:24.297364   43353 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1014 14:29:24.297373   43353 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1014 14:29:24.297379   43353 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1014 14:29:24.297385   43353 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1014 14:29:24.297397   43353 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1014 14:29:24.297402   43353 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1014 14:29:24.297409   43353 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1014 14:29:24.297413   43353 command_runner.go:130] > # shared_cpuset = ""
	I1014 14:29:24.297421   43353 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1014 14:29:24.297426   43353 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1014 14:29:24.297432   43353 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1014 14:29:24.297439   43353 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1014 14:29:24.297445   43353 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1014 14:29:24.297450   43353 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1014 14:29:24.297457   43353 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1014 14:29:24.297463   43353 command_runner.go:130] > # enable_criu_support = false
	I1014 14:29:24.297469   43353 command_runner.go:130] > # Enable/disable the generation of the container,
	I1014 14:29:24.297479   43353 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1014 14:29:24.297485   43353 command_runner.go:130] > # enable_pod_events = false
	I1014 14:29:24.297494   43353 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 14:29:24.297502   43353 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1014 14:29:24.297507   43353 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1014 14:29:24.297513   43353 command_runner.go:130] > # default_runtime = "runc"
	I1014 14:29:24.297522   43353 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1014 14:29:24.297531   43353 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1014 14:29:24.297540   43353 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1014 14:29:24.297547   43353 command_runner.go:130] > # creation as a file is not desired either.
	I1014 14:29:24.297555   43353 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1014 14:29:24.297568   43353 command_runner.go:130] > # the hostname is being managed dynamically.
	I1014 14:29:24.297575   43353 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1014 14:29:24.297578   43353 command_runner.go:130] > # ]
	I1014 14:29:24.297585   43353 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1014 14:29:24.297592   43353 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1014 14:29:24.297599   43353 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1014 14:29:24.297607   43353 command_runner.go:130] > # Each entry in the table should follow the format:
	I1014 14:29:24.297610   43353 command_runner.go:130] > #
	I1014 14:29:24.297615   43353 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1014 14:29:24.297621   43353 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1014 14:29:24.297664   43353 command_runner.go:130] > # runtime_type = "oci"
	I1014 14:29:24.297670   43353 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1014 14:29:24.297675   43353 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1014 14:29:24.297681   43353 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1014 14:29:24.297686   43353 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1014 14:29:24.297691   43353 command_runner.go:130] > # monitor_env = []
	I1014 14:29:24.297696   43353 command_runner.go:130] > # privileged_without_host_devices = false
	I1014 14:29:24.297700   43353 command_runner.go:130] > # allowed_annotations = []
	I1014 14:29:24.297707   43353 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1014 14:29:24.297712   43353 command_runner.go:130] > # Where:
	I1014 14:29:24.297723   43353 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1014 14:29:24.297732   43353 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1014 14:29:24.297744   43353 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1014 14:29:24.297753   43353 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1014 14:29:24.297761   43353 command_runner.go:130] > #   in $PATH.
	I1014 14:29:24.297771   43353 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1014 14:29:24.297781   43353 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1014 14:29:24.297790   43353 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1014 14:29:24.297804   43353 command_runner.go:130] > #   state.
	I1014 14:29:24.297814   43353 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1014 14:29:24.297822   43353 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1014 14:29:24.297828   43353 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1014 14:29:24.297835   43353 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1014 14:29:24.297841   43353 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1014 14:29:24.297849   43353 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1014 14:29:24.297854   43353 command_runner.go:130] > #   The currently recognized values are:
	I1014 14:29:24.297862   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1014 14:29:24.297869   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1014 14:29:24.297877   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1014 14:29:24.297885   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1014 14:29:24.297894   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1014 14:29:24.297902   43353 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1014 14:29:24.297909   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1014 14:29:24.297917   43353 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1014 14:29:24.297926   43353 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1014 14:29:24.297932   43353 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1014 14:29:24.297938   43353 command_runner.go:130] > #   deprecated option "conmon".
	I1014 14:29:24.297944   43353 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1014 14:29:24.297951   43353 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1014 14:29:24.297957   43353 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1014 14:29:24.297964   43353 command_runner.go:130] > #   should be moved to the container's cgroup
	I1014 14:29:24.297970   43353 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1014 14:29:24.297977   43353 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1014 14:29:24.297983   43353 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1014 14:29:24.297989   43353 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1014 14:29:24.297993   43353 command_runner.go:130] > #
	I1014 14:29:24.297999   43353 command_runner.go:130] > # Using the seccomp notifier feature:
	I1014 14:29:24.298003   43353 command_runner.go:130] > #
	I1014 14:29:24.298011   43353 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1014 14:29:24.298019   43353 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1014 14:29:24.298024   43353 command_runner.go:130] > #
	I1014 14:29:24.298034   43353 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1014 14:29:24.298042   43353 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1014 14:29:24.298048   43353 command_runner.go:130] > #
	I1014 14:29:24.298054   43353 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1014 14:29:24.298059   43353 command_runner.go:130] > # feature.
	I1014 14:29:24.298063   43353 command_runner.go:130] > #
	I1014 14:29:24.298070   43353 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1014 14:29:24.298076   43353 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1014 14:29:24.298084   43353 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1014 14:29:24.298092   43353 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1014 14:29:24.298100   43353 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1014 14:29:24.298103   43353 command_runner.go:130] > #
	I1014 14:29:24.298110   43353 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1014 14:29:24.298116   43353 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1014 14:29:24.298121   43353 command_runner.go:130] > #
	I1014 14:29:24.298127   43353 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1014 14:29:24.298134   43353 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1014 14:29:24.298141   43353 command_runner.go:130] > #
	I1014 14:29:24.298146   43353 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1014 14:29:24.298154   43353 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1014 14:29:24.298160   43353 command_runner.go:130] > # limitation.
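The notifier workflow described above needs both sides: a runtime handler whose allowed_annotations includes "io.kubernetes.cri-o.seccompNotifierAction", and a pod that sets that annotation with restartPolicy Never. A minimal sketch of the pod side (the pod name, image, and kubectl invocation are illustrative assumptions, not part of this test run):

# Sketch only: assumes the chosen runtime handler permits the annotation.
kubectl run seccomp-probe --image=busybox --restart=Never \
  --annotations="io.kubernetes.cri-o.seccompNotifierAction=stop" \
  -- sleep 3600
# With the value "stop", CRI-O terminates the workload roughly 5s after a blocked syscall is reported.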
	I1014 14:29:24.298165   43353 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1014 14:29:24.298171   43353 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1014 14:29:24.298175   43353 command_runner.go:130] > runtime_type = "oci"
	I1014 14:29:24.298182   43353 command_runner.go:130] > runtime_root = "/run/runc"
	I1014 14:29:24.298186   43353 command_runner.go:130] > runtime_config_path = ""
	I1014 14:29:24.298192   43353 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1014 14:29:24.298196   43353 command_runner.go:130] > monitor_cgroup = "pod"
	I1014 14:29:24.298200   43353 command_runner.go:130] > monitor_exec_cgroup = ""
	I1014 14:29:24.298211   43353 command_runner.go:130] > monitor_env = [
	I1014 14:29:24.298219   43353 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1014 14:29:24.298223   43353 command_runner.go:130] > ]
	I1014 14:29:24.298227   43353 command_runner.go:130] > privileged_without_host_devices = false
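The runc entry above is a concrete instance of the [crio.runtime.runtimes.*] table format documented earlier. As a hedged sketch of registering an additional handler (the crun paths and drop-in file name are assumptions, not taken from this run):

# Hypothetical drop-in; CRI-O merges *.conf files from /etc/crio/crio.conf.d into its config.
sudo tee /etc/crio/crio.conf.d/20-crun.conf >/dev/null <<'EOF'
[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"
monitor_path = "/usr/libexec/crio/conmon"
monitor_cgroup = "pod"
EOF
sudo systemctl restart crio   # restart so the new runtime handler is picked up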
	I1014 14:29:24.298323   43353 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1014 14:29:24.298643   43353 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1014 14:29:24.298665   43353 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1014 14:29:24.298679   43353 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1014 14:29:24.298699   43353 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1014 14:29:24.298709   43353 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1014 14:29:24.298732   43353 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1014 14:29:24.298751   43353 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1014 14:29:24.298761   43353 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1014 14:29:24.298773   43353 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1014 14:29:24.298784   43353 command_runner.go:130] > # Example:
	I1014 14:29:24.298791   43353 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1014 14:29:24.298799   43353 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1014 14:29:24.298807   43353 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1014 14:29:24.298821   43353 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1014 14:29:24.298826   43353 command_runner.go:130] > # cpuset = 0
	I1014 14:29:24.298833   43353 command_runner.go:130] > # cpushares = "0-1"
	I1014 14:29:24.298837   43353 command_runner.go:130] > # Where:
	I1014 14:29:24.298845   43353 command_runner.go:130] > # The workload name is workload-type.
	I1014 14:29:24.298862   43353 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1014 14:29:24.298870   43353 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1014 14:29:24.298879   43353 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1014 14:29:24.298898   43353 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1014 14:29:24.298911   43353 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1014 14:29:24.298925   43353 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1014 14:29:24.298937   43353 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1014 14:29:24.298948   43353 command_runner.go:130] > # Default value is set to true
	I1014 14:29:24.298959   43353 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1014 14:29:24.298973   43353 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1014 14:29:24.298983   43353 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1014 14:29:24.298990   43353 command_runner.go:130] > # Default value is set to 'false'
	I1014 14:29:24.298999   43353 command_runner.go:130] > # disable_hostport_mapping = false
	I1014 14:29:24.299014   43353 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1014 14:29:24.299036   43353 command_runner.go:130] > #
	I1014 14:29:24.299049   43353 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1014 14:29:24.299067   43353 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1014 14:29:24.299079   43353 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1014 14:29:24.299091   43353 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1014 14:29:24.299105   43353 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1014 14:29:24.299111   43353 command_runner.go:130] > [crio.image]
	I1014 14:29:24.299121   43353 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1014 14:29:24.299136   43353 command_runner.go:130] > # default_transport = "docker://"
	I1014 14:29:24.299148   43353 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1014 14:29:24.299161   43353 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1014 14:29:24.299176   43353 command_runner.go:130] > # global_auth_file = ""
	I1014 14:29:24.299186   43353 command_runner.go:130] > # The image used to instantiate infra containers.
	I1014 14:29:24.299195   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.299203   43353 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1014 14:29:24.299217   43353 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1014 14:29:24.299228   43353 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1014 14:29:24.299239   43353 command_runner.go:130] > # This option supports live configuration reload.
	I1014 14:29:24.299250   43353 command_runner.go:130] > # pause_image_auth_file = ""
	I1014 14:29:24.299261   43353 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1014 14:29:24.299273   43353 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1014 14:29:24.299287   43353 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1014 14:29:24.299296   43353 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1014 14:29:24.299306   43353 command_runner.go:130] > # pause_command = "/pause"
	I1014 14:29:24.299320   43353 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1014 14:29:24.299331   43353 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1014 14:29:24.299343   43353 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1014 14:29:24.299366   43353 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1014 14:29:24.299376   43353 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1014 14:29:24.299396   43353 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1014 14:29:24.299408   43353 command_runner.go:130] > # pinned_images = [
	I1014 14:29:24.299426   43353 command_runner.go:130] > # ]
	I1014 14:29:24.299438   43353 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1014 14:29:24.299465   43353 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1014 14:29:24.299475   43353 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1014 14:29:24.299487   43353 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1014 14:29:24.299503   43353 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1014 14:29:24.299512   43353 command_runner.go:130] > # signature_policy = ""
	I1014 14:29:24.299521   43353 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1014 14:29:24.299539   43353 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1014 14:29:24.299555   43353 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1014 14:29:24.299565   43353 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1014 14:29:24.299580   43353 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1014 14:29:24.299592   43353 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1014 14:29:24.299604   43353 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1014 14:29:24.299621   43353 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1014 14:29:24.299636   43353 command_runner.go:130] > # changing them here.
	I1014 14:29:24.299659   43353 command_runner.go:130] > # insecure_registries = [
	I1014 14:29:24.299666   43353 command_runner.go:130] > # ]
	I1014 14:29:24.299679   43353 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1014 14:29:24.299695   43353 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1014 14:29:24.299702   43353 command_runner.go:130] > # image_volumes = "mkdir"
	I1014 14:29:24.299710   43353 command_runner.go:130] > # Temporary directory to use for storing big files
	I1014 14:29:24.299717   43353 command_runner.go:130] > # big_files_temporary_dir = ""
	I1014 14:29:24.299731   43353 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1014 14:29:24.299737   43353 command_runner.go:130] > # CNI plugins.
	I1014 14:29:24.299743   43353 command_runner.go:130] > [crio.network]
	I1014 14:29:24.299751   43353 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1014 14:29:24.299764   43353 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1014 14:29:24.299771   43353 command_runner.go:130] > # cni_default_network = ""
	I1014 14:29:24.299780   43353 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1014 14:29:24.299859   43353 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1014 14:29:24.299915   43353 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1014 14:29:24.300234   43353 command_runner.go:130] > # plugin_dirs = [
	I1014 14:29:24.300247   43353 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1014 14:29:24.300252   43353 command_runner.go:130] > # ]
	I1014 14:29:24.300261   43353 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1014 14:29:24.300267   43353 command_runner.go:130] > [crio.metrics]
	I1014 14:29:24.300276   43353 command_runner.go:130] > # Globally enable or disable metrics support.
	I1014 14:29:24.300286   43353 command_runner.go:130] > enable_metrics = true
	I1014 14:29:24.300296   43353 command_runner.go:130] > # Specify enabled metrics collectors.
	I1014 14:29:24.300306   43353 command_runner.go:130] > # Per default all metrics are enabled.
	I1014 14:29:24.300315   43353 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1014 14:29:24.300327   43353 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1014 14:29:24.300338   43353 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1014 14:29:24.300347   43353 command_runner.go:130] > # metrics_collectors = [
	I1014 14:29:24.300353   43353 command_runner.go:130] > # 	"operations",
	I1014 14:29:24.300363   43353 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1014 14:29:24.300370   43353 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1014 14:29:24.300380   43353 command_runner.go:130] > # 	"operations_errors",
	I1014 14:29:24.300387   43353 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1014 14:29:24.300396   43353 command_runner.go:130] > # 	"image_pulls_by_name",
	I1014 14:29:24.300403   43353 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1014 14:29:24.300426   43353 command_runner.go:130] > # 	"image_pulls_failures",
	I1014 14:29:24.300437   43353 command_runner.go:130] > # 	"image_pulls_successes",
	I1014 14:29:24.300443   43353 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1014 14:29:24.300449   43353 command_runner.go:130] > # 	"image_layer_reuse",
	I1014 14:29:24.300459   43353 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1014 14:29:24.300465   43353 command_runner.go:130] > # 	"containers_oom_total",
	I1014 14:29:24.300474   43353 command_runner.go:130] > # 	"containers_oom",
	I1014 14:29:24.300480   43353 command_runner.go:130] > # 	"processes_defunct",
	I1014 14:29:24.300489   43353 command_runner.go:130] > # 	"operations_total",
	I1014 14:29:24.300496   43353 command_runner.go:130] > # 	"operations_latency_seconds",
	I1014 14:29:24.300505   43353 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1014 14:29:24.300512   43353 command_runner.go:130] > # 	"operations_errors_total",
	I1014 14:29:24.300521   43353 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1014 14:29:24.300531   43353 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1014 14:29:24.300539   43353 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1014 14:29:24.300548   43353 command_runner.go:130] > # 	"image_pulls_success_total",
	I1014 14:29:24.300556   43353 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1014 14:29:24.300565   43353 command_runner.go:130] > # 	"containers_oom_count_total",
	I1014 14:29:24.300575   43353 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1014 14:29:24.300585   43353 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1014 14:29:24.300590   43353 command_runner.go:130] > # ]
	I1014 14:29:24.300600   43353 command_runner.go:130] > # The port on which the metrics server will listen.
	I1014 14:29:24.300606   43353 command_runner.go:130] > # metrics_port = 9090
	I1014 14:29:24.300617   43353 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1014 14:29:24.300627   43353 command_runner.go:130] > # metrics_socket = ""
	I1014 14:29:24.300634   43353 command_runner.go:130] > # The certificate for the secure metrics server.
	I1014 14:29:24.300646   43353 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1014 14:29:24.300656   43353 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1014 14:29:24.300665   43353 command_runner.go:130] > # certificate on any modification event.
	I1014 14:29:24.300676   43353 command_runner.go:130] > # metrics_cert = ""
	I1014 14:29:24.300685   43353 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1014 14:29:24.300695   43353 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1014 14:29:24.300705   43353 command_runner.go:130] > # metrics_key = ""
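Since enable_metrics is set to true above, the Prometheus endpoint should be live on this node. A quick scrape, assuming the default metrics_port of 9090 and running from the node itself (e.g. via minikube ssh; not something this test does), might look like:

# Sketch only: prints a few CRI-O counters from the metrics endpoint.
curl -s http://127.0.0.1:9090/metrics | grep '^crio_' | head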
	I1014 14:29:24.300713   43353 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1014 14:29:24.300721   43353 command_runner.go:130] > [crio.tracing]
	I1014 14:29:24.300730   43353 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1014 14:29:24.300740   43353 command_runner.go:130] > # enable_tracing = false
	I1014 14:29:24.300748   43353 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1014 14:29:24.300755   43353 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1014 14:29:24.300767   43353 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1014 14:29:24.300774   43353 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1014 14:29:24.300781   43353 command_runner.go:130] > # CRI-O NRI configuration.
	I1014 14:29:24.300789   43353 command_runner.go:130] > [crio.nri]
	I1014 14:29:24.300798   43353 command_runner.go:130] > # Globally enable or disable NRI.
	I1014 14:29:24.300806   43353 command_runner.go:130] > # enable_nri = false
	I1014 14:29:24.300813   43353 command_runner.go:130] > # NRI socket to listen on.
	I1014 14:29:24.300823   43353 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1014 14:29:24.300829   43353 command_runner.go:130] > # NRI plugin directory to use.
	I1014 14:29:24.300836   43353 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1014 14:29:24.300847   43353 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1014 14:29:24.300858   43353 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1014 14:29:24.300869   43353 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1014 14:29:24.300879   43353 command_runner.go:130] > # nri_disable_connections = false
	I1014 14:29:24.300890   43353 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1014 14:29:24.300904   43353 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1014 14:29:24.300922   43353 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1014 14:29:24.300936   43353 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1014 14:29:24.300949   43353 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1014 14:29:24.300955   43353 command_runner.go:130] > [crio.stats]
	I1014 14:29:24.300967   43353 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1014 14:29:24.300978   43353 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1014 14:29:24.300987   43353 command_runner.go:130] > # stats_collection_period = 0
	I1014 14:29:24.301068   43353 cni.go:84] Creating CNI manager for ""
	I1014 14:29:24.301079   43353 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1014 14:29:24.301088   43353 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:29:24.301106   43353 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.46 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-740856 NodeName:multinode-740856 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.46"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.46 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 14:29:24.301237   43353 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.46
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-740856"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.46"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.46"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 14:29:24.301290   43353 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 14:29:24.312738   43353 command_runner.go:130] > kubeadm
	I1014 14:29:24.312753   43353 command_runner.go:130] > kubectl
	I1014 14:29:24.312757   43353 command_runner.go:130] > kubelet
	I1014 14:29:24.312908   43353 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:29:24.312959   43353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 14:29:24.322794   43353 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1014 14:29:24.341613   43353 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:29:24.360060   43353 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
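The kubeadm config generated above has just been written to /var/tmp/minikube/kubeadm.yaml.new. As a hedged sketch (not something this test runs; the binary and file paths are taken from the surrounding log lines, and it assumes this kubeadm release ships the "config validate" subcommand), the file could be sanity-checked without touching cluster state:

# Sketch only: validates the generated config file, makes no changes to the node.
sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
  --config /var/tmp/minikube/kubeadm.yaml.new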
	I1014 14:29:24.378483   43353 ssh_runner.go:195] Run: grep 192.168.39.46	control-plane.minikube.internal$ /etc/hosts
	I1014 14:29:24.382665   43353 command_runner.go:130] > 192.168.39.46	control-plane.minikube.internal
	I1014 14:29:24.382734   43353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:29:24.532787   43353 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:29:24.547537   43353 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856 for IP: 192.168.39.46
	I1014 14:29:24.547566   43353 certs.go:194] generating shared ca certs ...
	I1014 14:29:24.547583   43353 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:29:24.547851   43353 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 14:29:24.547917   43353 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 14:29:24.547929   43353 certs.go:256] generating profile certs ...
	I1014 14:29:24.548007   43353 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/client.key
	I1014 14:29:24.548070   43353 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.key.eae55c26
	I1014 14:29:24.548133   43353 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.key
	I1014 14:29:24.548145   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1014 14:29:24.548162   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1014 14:29:24.548173   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1014 14:29:24.548184   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1014 14:29:24.548195   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1014 14:29:24.548207   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1014 14:29:24.548217   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1014 14:29:24.548229   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1014 14:29:24.548279   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 14:29:24.548309   43353 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 14:29:24.548322   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 14:29:24.548346   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 14:29:24.548367   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:29:24.548387   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 14:29:24.548422   43353 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:29:24.548448   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.548460   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem -> /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.548472   43353 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.549009   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:29:24.573312   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:29:24.596589   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:29:24.620150   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 14:29:24.643170   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 14:29:24.666964   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 14:29:24.690231   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:29:24.713810   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/multinode-740856/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 14:29:24.737078   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:29:24.760628   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 14:29:24.784289   43353 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 14:29:24.807254   43353 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:29:24.823857   43353 ssh_runner.go:195] Run: openssl version
	I1014 14:29:24.829673   43353 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1014 14:29:24.829742   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 14:29:24.840383   43353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.844886   43353 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.844911   43353 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.844942   43353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 14:29:24.850488   43353 command_runner.go:130] > 3ec20f2e
	I1014 14:29:24.850543   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:29:24.859651   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:29:24.870032   43353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.874453   43353 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.874475   43353 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.874507   43353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:29:24.880084   43353 command_runner.go:130] > b5213941
	I1014 14:29:24.880137   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:29:24.889823   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 14:29:24.900169   43353 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.904582   43353 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.904659   43353 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.904706   43353 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 14:29:24.911484   43353 command_runner.go:130] > 51391683
	I1014 14:29:24.911548   43353 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 14:29:24.921782   43353 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:29:24.927170   43353 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:29:24.927194   43353 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1014 14:29:24.927201   43353 command_runner.go:130] > Device: 253,1	Inode: 8384040     Links: 1
	I1014 14:29:24.927211   43353 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1014 14:29:24.927237   43353 command_runner.go:130] > Access: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927245   43353 command_runner.go:130] > Modify: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927256   43353 command_runner.go:130] > Change: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927264   43353 command_runner.go:130] >  Birth: 2024-10-14 14:22:44.982122401 +0000
	I1014 14:29:24.927319   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 14:29:24.933342   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.933544   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 14:29:24.939544   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.939742   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 14:29:24.945533   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.945739   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 14:29:24.951529   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.951582   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 14:29:24.957381   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.957434   43353 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 14:29:24.962769   43353 command_runner.go:130] > Certificate will not expire
	I1014 14:29:24.963024   43353 kubeadm.go:392] StartCluster: {Name:multinode-740856 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.
1 ClusterName:multinode-740856 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.46 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.81 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.82 Port:0 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:29:24.963140   43353 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 14:29:24.963190   43353 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:29:25.001844   43353 command_runner.go:130] > 7e72e1d43f5366c2f70993fa3a08a337f4b8c801487f277583a88f020cc11c10
	I1014 14:29:25.001866   43353 command_runner.go:130] > 4019b738ca4f28861ffdb40677a8c4ce2550b9baebbe4a973d64492a1dcfbff7
	I1014 14:29:25.001875   43353 command_runner.go:130] > 87470ac4eaca438897add9e375dddcb650dbd05770f45d94edbf259e60468db6
	I1014 14:29:25.001909   43353 command_runner.go:130] > 4da188fe9b0f370d054f6beffd8587a8a9fdd4d0c2db1539e2e595a5d4b9c871
	I1014 14:29:25.001985   43353 command_runner.go:130] > 1bfc55554fe6c8a963f799027a265d188484388cf15d0b2b3bcc52c6c7cf7095
	I1014 14:29:25.002017   43353 command_runner.go:130] > 4bbdb50a55e79de8cf1540f5fbfb9948a264aa06008c04a0626f7e3d01673693
	I1014 14:29:25.002104   43353 command_runner.go:130] > f6820157a4a338a4a4df260165d393247a734ce3fbdd5e6b2eb87de2723c7f8a
	I1014 14:29:25.002166   43353 command_runner.go:130] > 377bf132ef7fe09e3d871ef952d1b4b9127a4d4f7f85dc193eaa78062b662ab0
	I1014 14:29:25.003622   43353 cri.go:89] found id: "7e72e1d43f5366c2f70993fa3a08a337f4b8c801487f277583a88f020cc11c10"
	I1014 14:29:25.003639   43353 cri.go:89] found id: "4019b738ca4f28861ffdb40677a8c4ce2550b9baebbe4a973d64492a1dcfbff7"
	I1014 14:29:25.003642   43353 cri.go:89] found id: "87470ac4eaca438897add9e375dddcb650dbd05770f45d94edbf259e60468db6"
	I1014 14:29:25.003646   43353 cri.go:89] found id: "4da188fe9b0f370d054f6beffd8587a8a9fdd4d0c2db1539e2e595a5d4b9c871"
	I1014 14:29:25.003648   43353 cri.go:89] found id: "1bfc55554fe6c8a963f799027a265d188484388cf15d0b2b3bcc52c6c7cf7095"
	I1014 14:29:25.003652   43353 cri.go:89] found id: "4bbdb50a55e79de8cf1540f5fbfb9948a264aa06008c04a0626f7e3d01673693"
	I1014 14:29:25.003654   43353 cri.go:89] found id: "f6820157a4a338a4a4df260165d393247a734ce3fbdd5e6b2eb87de2723c7f8a"
	I1014 14:29:25.003657   43353 cri.go:89] found id: "377bf132ef7fe09e3d871ef952d1b4b9127a4d4f7f85dc193eaa78062b662ab0"
	I1014 14:29:25.003659   43353 cri.go:89] found id: ""
	I1014 14:29:25.003699   43353 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-740856 -n multinode-740856
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-740856 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.26s)

                                                
                                    
TestPreload (165.66s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-675136 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1014 14:38:36.994151   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-675136 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m30.613185643s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-675136 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-675136 image pull gcr.io/k8s-minikube/busybox: (2.710990253s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-675136
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-675136: (7.287789338s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-675136 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-675136 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m2.152481758s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-675136 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
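For reference, the assertion at preload_test.go:76 only checks that the busybox image pulled before the stop is still present after the non-preload restart. A minimal manual sketch of the same check, using only commands already exercised in this log (the grep pattern is illustrative):

	out/minikube-linux-amd64 -p test-preload-675136 image list | grep gcr.io/k8s-minikube/busybox
	# a non-zero exit status here corresponds to the failure reported above: busybox is absent from the image list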
panic.go:629: *** TestPreload FAILED at 2024-10-14 14:40:12.004495035 +0000 UTC m=+3696.285843391
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-675136 -n test-preload-675136
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-675136 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-675136 logs -n 25: (1.104688962s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856 sudo cat                                       | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m03_multinode-740856.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt                       | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m02:/home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n                                                                 | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | multinode-740856-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-740856 ssh -n multinode-740856-m02 sudo cat                                   | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | /home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-740856 node stop m03                                                          | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	| node    | multinode-740856 node start                                                             | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC | 14 Oct 24 14:25 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-740856                                                                | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC |                     |
	| stop    | -p multinode-740856                                                                     | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:25 UTC |                     |
	| start   | -p multinode-740856                                                                     | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:31 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-740856                                                                | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:31 UTC |                     |
	| node    | multinode-740856 node delete                                                            | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:31 UTC | 14 Oct 24 14:31 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-740856 stop                                                                   | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:31 UTC |                     |
	| start   | -p multinode-740856                                                                     | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:33 UTC | 14 Oct 24 14:36 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-740856                                                                | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:36 UTC |                     |
	| start   | -p multinode-740856-m02                                                                 | multinode-740856-m02 | jenkins | v1.34.0 | 14 Oct 24 14:36 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-740856-m03                                                                 | multinode-740856-m03 | jenkins | v1.34.0 | 14 Oct 24 14:36 UTC | 14 Oct 24 14:37 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-740856                                                                 | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:37 UTC |                     |
	| delete  | -p multinode-740856-m03                                                                 | multinode-740856-m03 | jenkins | v1.34.0 | 14 Oct 24 14:37 UTC | 14 Oct 24 14:37 UTC |
	| delete  | -p multinode-740856                                                                     | multinode-740856     | jenkins | v1.34.0 | 14 Oct 24 14:37 UTC | 14 Oct 24 14:37 UTC |
	| start   | -p test-preload-675136                                                                  | test-preload-675136  | jenkins | v1.34.0 | 14 Oct 24 14:37 UTC | 14 Oct 24 14:38 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-675136 image pull                                                          | test-preload-675136  | jenkins | v1.34.0 | 14 Oct 24 14:38 UTC | 14 Oct 24 14:39 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-675136                                                                  | test-preload-675136  | jenkins | v1.34.0 | 14 Oct 24 14:39 UTC | 14 Oct 24 14:39 UTC |
	| start   | -p test-preload-675136                                                                  | test-preload-675136  | jenkins | v1.34.0 | 14 Oct 24 14:39 UTC | 14 Oct 24 14:40 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-675136 image list                                                          | test-preload-675136  | jenkins | v1.34.0 | 14 Oct 24 14:40 UTC | 14 Oct 24 14:40 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:39:09
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:39:09.680558   47690 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:39:09.680646   47690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:39:09.680650   47690 out.go:358] Setting ErrFile to fd 2...
	I1014 14:39:09.680654   47690 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:39:09.680817   47690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:39:09.681302   47690 out.go:352] Setting JSON to false
	I1014 14:39:09.682149   47690 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":4900,"bootTime":1728911850,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:39:09.682240   47690 start.go:139] virtualization: kvm guest
	I1014 14:39:09.684664   47690 out.go:177] * [test-preload-675136] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:39:09.686144   47690 notify.go:220] Checking for updates...
	I1014 14:39:09.686154   47690 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:39:09.687345   47690 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:39:09.688749   47690 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:39:09.689976   47690 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:39:09.691474   47690 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:39:09.692745   47690 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:39:09.694504   47690 config.go:182] Loaded profile config "test-preload-675136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1014 14:39:09.695103   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:39:09.695178   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:39:09.709843   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42199
	I1014 14:39:09.710297   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:39:09.710860   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:39:09.710883   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:39:09.711351   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:39:09.711550   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:09.713506   47690 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:39:09.715011   47690 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:39:09.715310   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:39:09.715351   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:39:09.729583   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36963
	I1014 14:39:09.729976   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:39:09.730398   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:39:09.730423   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:39:09.730728   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:39:09.730909   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:09.764312   47690 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:39:09.765632   47690 start.go:297] selected driver: kvm2
	I1014 14:39:09.765646   47690 start.go:901] validating driver "kvm2" against &{Name:test-preload-675136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-675136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:39:09.765741   47690 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:39:09.766496   47690 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:39:09.766557   47690 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:39:09.780426   47690 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:39:09.780741   47690 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:39:09.780774   47690 cni.go:84] Creating CNI manager for ""
	I1014 14:39:09.780813   47690 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:39:09.780866   47690 start.go:340] cluster config:
	{Name:test-preload-675136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-675136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:39:09.780955   47690 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:39:09.783990   47690 out.go:177] * Starting "test-preload-675136" primary control-plane node in "test-preload-675136" cluster
	I1014 14:39:09.785128   47690 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1014 14:39:09.809757   47690 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1014 14:39:09.809786   47690 cache.go:56] Caching tarball of preloaded images
	I1014 14:39:09.809904   47690 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1014 14:39:09.811777   47690 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1014 14:39:09.812825   47690 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1014 14:39:09.847761   47690 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1014 14:39:13.385719   47690 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1014 14:39:13.385820   47690 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1014 14:39:14.222434   47690 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1014 14:39:14.222560   47690 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/config.json ...
	I1014 14:39:14.222795   47690 start.go:360] acquireMachinesLock for test-preload-675136: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:39:14.222856   47690 start.go:364] duration metric: took 39.1µs to acquireMachinesLock for "test-preload-675136"
	I1014 14:39:14.222871   47690 start.go:96] Skipping create...Using existing machine configuration
	I1014 14:39:14.222876   47690 fix.go:54] fixHost starting: 
	I1014 14:39:14.223120   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:39:14.223152   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:39:14.237575   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I1014 14:39:14.238046   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:39:14.238519   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:39:14.238540   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:39:14.238903   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:39:14.239162   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:14.239310   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetState
	I1014 14:39:14.241018   47690 fix.go:112] recreateIfNeeded on test-preload-675136: state=Stopped err=<nil>
	I1014 14:39:14.241046   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	W1014 14:39:14.241204   47690 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 14:39:14.243493   47690 out.go:177] * Restarting existing kvm2 VM for "test-preload-675136" ...
	I1014 14:39:14.244915   47690 main.go:141] libmachine: (test-preload-675136) Calling .Start
	I1014 14:39:14.245111   47690 main.go:141] libmachine: (test-preload-675136) Ensuring networks are active...
	I1014 14:39:14.245894   47690 main.go:141] libmachine: (test-preload-675136) Ensuring network default is active
	I1014 14:39:14.246235   47690 main.go:141] libmachine: (test-preload-675136) Ensuring network mk-test-preload-675136 is active
	I1014 14:39:14.246625   47690 main.go:141] libmachine: (test-preload-675136) Getting domain xml...
	I1014 14:39:14.247436   47690 main.go:141] libmachine: (test-preload-675136) Creating domain...
	I1014 14:39:15.449662   47690 main.go:141] libmachine: (test-preload-675136) Waiting to get IP...
	I1014 14:39:15.450426   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:15.450751   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:15.450828   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:15.450753   47741 retry.go:31] will retry after 214.909386ms: waiting for machine to come up
	I1014 14:39:15.667221   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:15.667677   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:15.667698   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:15.667643   47741 retry.go:31] will retry after 376.761618ms: waiting for machine to come up
	I1014 14:39:16.046288   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:16.046643   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:16.046665   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:16.046606   47741 retry.go:31] will retry after 485.283893ms: waiting for machine to come up
	I1014 14:39:16.533098   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:16.533582   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:16.533606   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:16.533537   47741 retry.go:31] will retry after 370.231916ms: waiting for machine to come up
	I1014 14:39:16.905144   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:16.905721   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:16.905744   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:16.905670   47741 retry.go:31] will retry after 653.741176ms: waiting for machine to come up
	I1014 14:39:17.560514   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:17.561107   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:17.561126   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:17.561061   47741 retry.go:31] will retry after 643.339185ms: waiting for machine to come up
	I1014 14:39:18.205816   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:18.206162   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:18.206179   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:18.206132   47741 retry.go:31] will retry after 1.061899447s: waiting for machine to come up
	I1014 14:39:19.269688   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:19.270042   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:19.270073   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:19.269989   47741 retry.go:31] will retry after 1.481064682s: waiting for machine to come up
	I1014 14:39:20.753765   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:20.754180   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:20.754209   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:20.754128   47741 retry.go:31] will retry after 1.275490425s: waiting for machine to come up
	I1014 14:39:22.030991   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:22.031488   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:22.031512   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:22.031436   47741 retry.go:31] will retry after 1.910172322s: waiting for machine to come up
	I1014 14:39:23.943653   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:23.944105   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:23.944160   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:23.944057   47741 retry.go:31] will retry after 1.767163162s: waiting for machine to come up
	I1014 14:39:25.712300   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:25.712686   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:25.712714   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:25.712648   47741 retry.go:31] will retry after 2.650135321s: waiting for machine to come up
	I1014 14:39:28.366362   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:28.366851   47690 main.go:141] libmachine: (test-preload-675136) DBG | unable to find current IP address of domain test-preload-675136 in network mk-test-preload-675136
	I1014 14:39:28.366884   47690 main.go:141] libmachine: (test-preload-675136) DBG | I1014 14:39:28.366795   47741 retry.go:31] will retry after 4.147624087s: waiting for machine to come up
	I1014 14:39:32.518290   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.518819   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has current primary IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.518846   47690 main.go:141] libmachine: (test-preload-675136) Found IP for machine: 192.168.39.100
	I1014 14:39:32.518860   47690 main.go:141] libmachine: (test-preload-675136) Reserving static IP address...
	I1014 14:39:32.519280   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "test-preload-675136", mac: "52:54:00:73:93:84", ip: "192.168.39.100"} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:32.519309   47690 main.go:141] libmachine: (test-preload-675136) Reserved static IP address: 192.168.39.100
	I1014 14:39:32.519330   47690 main.go:141] libmachine: (test-preload-675136) DBG | skip adding static IP to network mk-test-preload-675136 - found existing host DHCP lease matching {name: "test-preload-675136", mac: "52:54:00:73:93:84", ip: "192.168.39.100"}
	I1014 14:39:32.519348   47690 main.go:141] libmachine: (test-preload-675136) DBG | Getting to WaitForSSH function...
	I1014 14:39:32.519362   47690 main.go:141] libmachine: (test-preload-675136) Waiting for SSH to be available...
	I1014 14:39:32.521548   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.521971   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:32.521998   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.522182   47690 main.go:141] libmachine: (test-preload-675136) DBG | Using SSH client type: external
	I1014 14:39:32.522224   47690 main.go:141] libmachine: (test-preload-675136) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa (-rw-------)
	I1014 14:39:32.522266   47690 main.go:141] libmachine: (test-preload-675136) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 14:39:32.522285   47690 main.go:141] libmachine: (test-preload-675136) DBG | About to run SSH command:
	I1014 14:39:32.522309   47690 main.go:141] libmachine: (test-preload-675136) DBG | exit 0
	I1014 14:39:32.646662   47690 main.go:141] libmachine: (test-preload-675136) DBG | SSH cmd err, output: <nil>: 
	I1014 14:39:32.647005   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetConfigRaw
	I1014 14:39:32.647614   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetIP
	I1014 14:39:32.650004   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.650352   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:32.650376   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.650630   47690 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/config.json ...
	I1014 14:39:32.650799   47690 machine.go:93] provisionDockerMachine start ...
	I1014 14:39:32.650814   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:32.651002   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:32.653194   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.653546   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:32.653590   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.653666   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:32.653811   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:32.653936   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:32.654028   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:32.654163   47690 main.go:141] libmachine: Using SSH client type: native
	I1014 14:39:32.654426   47690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1014 14:39:32.654441   47690 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 14:39:32.758967   47690 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 14:39:32.759002   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetMachineName
	I1014 14:39:32.759234   47690 buildroot.go:166] provisioning hostname "test-preload-675136"
	I1014 14:39:32.759264   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetMachineName
	I1014 14:39:32.759428   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:32.762158   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.762570   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:32.762619   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.762740   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:32.762915   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:32.763027   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:32.763163   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:32.763294   47690 main.go:141] libmachine: Using SSH client type: native
	I1014 14:39:32.763469   47690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1014 14:39:32.763481   47690 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-675136 && echo "test-preload-675136" | sudo tee /etc/hostname
	I1014 14:39:32.880549   47690 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-675136
	
	I1014 14:39:32.880581   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:32.883442   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.883751   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:32.883799   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.883951   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:32.884131   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:32.884280   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:32.884385   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:32.884518   47690 main.go:141] libmachine: Using SSH client type: native
	I1014 14:39:32.884694   47690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1014 14:39:32.884717   47690 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-675136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-675136/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-675136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:39:32.995595   47690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:39:32.995624   47690 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 14:39:32.995646   47690 buildroot.go:174] setting up certificates
	I1014 14:39:32.995657   47690 provision.go:84] configureAuth start
	I1014 14:39:32.995669   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetMachineName
	I1014 14:39:32.995934   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetIP
	I1014 14:39:32.998448   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.998766   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:32.998810   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:32.998910   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:33.000690   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.000969   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.001005   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.001128   47690 provision.go:143] copyHostCerts
	I1014 14:39:33.001171   47690 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 14:39:33.001187   47690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:39:33.001249   47690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 14:39:33.001340   47690 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 14:39:33.001347   47690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:39:33.001380   47690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 14:39:33.001492   47690 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 14:39:33.001501   47690 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:39:33.001527   47690 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 14:39:33.001587   47690 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.test-preload-675136 san=[127.0.0.1 192.168.39.100 localhost minikube test-preload-675136]
	I1014 14:39:33.051262   47690 provision.go:177] copyRemoteCerts
	I1014 14:39:33.051323   47690 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:39:33.051350   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:33.053838   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.054116   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.054149   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.054324   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:33.054496   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.054670   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:33.054777   47690 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa Username:docker}
	I1014 14:39:33.137300   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 14:39:33.160833   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 14:39:33.184127   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 14:39:33.207658   47690 provision.go:87] duration metric: took 211.988664ms to configureAuth
	I1014 14:39:33.207684   47690 buildroot.go:189] setting minikube options for container-runtime
	I1014 14:39:33.207885   47690 config.go:182] Loaded profile config "test-preload-675136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1014 14:39:33.207968   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:33.210656   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.210982   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.211020   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.211190   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:33.211347   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.211499   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.211598   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:33.211714   47690 main.go:141] libmachine: Using SSH client type: native
	I1014 14:39:33.211877   47690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1014 14:39:33.211897   47690 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 14:39:33.457982   47690 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 14:39:33.458010   47690 machine.go:96] duration metric: took 807.200734ms to provisionDockerMachine
	I1014 14:39:33.458020   47690 start.go:293] postStartSetup for "test-preload-675136" (driver="kvm2")
	I1014 14:39:33.458029   47690 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:39:33.458043   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:33.458299   47690 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:39:33.458327   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:33.460876   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.461254   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.461288   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.461374   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:33.461563   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.461691   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:33.461816   47690 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa Username:docker}
	I1014 14:39:33.545244   47690 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:39:33.549479   47690 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 14:39:33.549500   47690 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 14:39:33.549553   47690 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 14:39:33.549637   47690 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 14:39:33.549729   47690 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:39:33.558577   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:39:33.582329   47690 start.go:296] duration metric: took 124.296942ms for postStartSetup
	I1014 14:39:33.582376   47690 fix.go:56] duration metric: took 19.359500052s for fixHost
	I1014 14:39:33.582395   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:33.584972   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.585316   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.585360   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.585540   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:33.585716   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.585844   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.585998   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:33.586159   47690 main.go:141] libmachine: Using SSH client type: native
	I1014 14:39:33.586335   47690 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1014 14:39:33.586348   47690 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 14:39:33.691724   47690 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728916773.667118507
	
	I1014 14:39:33.691754   47690 fix.go:216] guest clock: 1728916773.667118507
	I1014 14:39:33.691762   47690 fix.go:229] Guest: 2024-10-14 14:39:33.667118507 +0000 UTC Remote: 2024-10-14 14:39:33.58237981 +0000 UTC m=+23.938431999 (delta=84.738697ms)
	I1014 14:39:33.691784   47690 fix.go:200] guest clock delta is within tolerance: 84.738697ms
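The clock check runs `date +%s.%N` on the guest and subtracts the host-side timestamp captured around the call; the 84.738697ms delta above is just that difference. Re-doing the arithmetic with the values from this run:

    # guest epoch reported by `date +%s.%N` minus the host-recorded remote epoch
    echo "1728916773.667118507 - 1728916773.58237981" | bc
    # -> .084738697  (about 84.738697 ms, within the tolerance checked above)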
	I1014 14:39:33.691791   47690 start.go:83] releasing machines lock for "test-preload-675136", held for 19.468925042s
	I1014 14:39:33.691813   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:33.692038   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetIP
	I1014 14:39:33.694495   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.694971   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:33.695023   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.695099   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.695427   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:33.695595   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:39:33.695674   47690 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:39:33.695727   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:33.695768   47690 ssh_runner.go:195] Run: cat /version.json
	I1014 14:39:33.695794   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:39:33.698119   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.698434   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.698465   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.698486   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.698608   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:33.698775   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.698831   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:33.698879   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:33.698940   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:33.699026   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:39:33.699113   47690 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa Username:docker}
	I1014 14:39:33.699187   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:39:33.699302   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:39:33.699401   47690 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa Username:docker}
	I1014 14:39:33.775855   47690 ssh_runner.go:195] Run: systemctl --version
	I1014 14:39:33.802487   47690 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 14:39:33.942585   47690 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 14:39:33.949529   47690 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 14:39:33.949584   47690 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:39:33.965377   47690 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 14:39:33.965399   47690 start.go:495] detecting cgroup driver to use...
	I1014 14:39:33.965481   47690 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 14:39:33.982032   47690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 14:39:33.996102   47690 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:39:33.996168   47690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:39:34.010163   47690 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:39:34.024387   47690 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:39:34.134727   47690 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:39:34.276004   47690 docker.go:233] disabling docker service ...
	I1014 14:39:34.276075   47690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:39:34.290270   47690 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:39:34.303640   47690 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:39:34.446250   47690 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:39:34.562389   47690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:39:34.576473   47690 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:39:34.594280   47690 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1014 14:39:34.594343   47690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:39:34.604634   47690 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 14:39:34.604698   47690 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:39:34.614888   47690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:39:34.625247   47690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:39:34.636107   47690 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:39:34.646799   47690 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:39:34.657521   47690 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:39:34.674864   47690 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
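The run of sed edits above pins the pause image, switches CRI-O to the cgroupfs driver, moves conmon into the pod cgroup, and opens unprivileged ports via default_sysctls, all inside /etc/crio/crio.conf.d/02-crio.conf. A quick way to see the result; the expected values are reconstructed from the commands, not captured from this host:

    # inspect the keys the sed edits touched
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected (sketch):
    #   pause_image = "registry.k8s.io/pause:3.7"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",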
	I1014 14:39:34.685953   47690 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:39:34.696079   47690 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 14:39:34.696143   47690 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 14:39:34.709728   47690 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
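Because /proc/sys/net/bridge/bridge-nf-call-iptables is missing on a fresh boot, the netfilter probe above fails and minikube falls back to loading br_netfilter and enabling IPv4 forwarding instead of erroring out. The fallback, as executed above:

    # load the module so the bridge sysctls exist, then enable forwarding for pod traffic
    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    # optional re-check; succeeds once the module is loaded
    sudo sysctl net.bridge.bridge-nf-call-iptables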
	I1014 14:39:34.720357   47690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:39:34.835073   47690 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 14:39:34.927999   47690 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 14:39:34.928078   47690 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 14:39:34.933018   47690 start.go:563] Will wait 60s for crictl version
	I1014 14:39:34.933069   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:34.936769   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:39:34.976419   47690 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 14:39:34.976502   47690 ssh_runner.go:195] Run: crio --version
	I1014 14:39:35.005749   47690 ssh_runner.go:195] Run: crio --version
	I1014 14:39:35.035127   47690 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1014 14:39:35.036460   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetIP
	I1014 14:39:35.038896   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:35.039176   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:39:35.039200   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:39:35.039420   47690 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 14:39:35.043413   47690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
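The bash one-liner above keeps the host.minikube.internal entry idempotent: it strips any existing line ending in that name, appends the current mapping, and copies the temp file back over /etc/hosts. The same command, reformatted for readability:

    # drop any stale host.minikube.internal line, append the fresh mapping (tab-separated),
    # then install the temp file over /etc/hosts with sudo
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.39.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts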
	I1014 14:39:35.056033   47690 kubeadm.go:883] updating cluster {Name:test-preload-675136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-675136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:39:35.056164   47690 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1014 14:39:35.056222   47690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:39:35.091596   47690 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1014 14:39:35.091666   47690 ssh_runner.go:195] Run: which lz4
	I1014 14:39:35.095701   47690 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 14:39:35.099879   47690 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 14:39:35.099912   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1014 14:39:36.636781   47690 crio.go:462] duration metric: took 1.541108688s to copy over tarball
	I1014 14:39:36.636871   47690 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 14:39:39.059779   47690 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.422882299s)
	I1014 14:39:39.059802   47690 crio.go:469] duration metric: took 2.422987529s to extract the tarball
	I1014 14:39:39.059809   47690 ssh_runner.go:146] rm: /preloaded.tar.lz4
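With no preloaded images in the runtime, the cached preload tarball (459355427 bytes, roughly 459 MB) is copied into the guest, unpacked straight into /var, and then deleted. The extraction step, as run above:

    # unpack the lz4-compressed preload (container image store plus metadata) into /var,
    # preserving the security.capability extended attributes, then remove the archive
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4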
	I1014 14:39:39.101269   47690 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:39:39.144857   47690 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1014 14:39:39.144885   47690 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 14:39:39.144963   47690 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:39:39.144969   47690 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1014 14:39:39.145023   47690 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 14:39:39.145094   47690 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 14:39:39.145102   47690 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1014 14:39:39.145028   47690 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1014 14:39:39.145042   47690 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1014 14:39:39.145049   47690 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1014 14:39:39.146465   47690 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1014 14:39:39.146509   47690 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 14:39:39.146467   47690 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1014 14:39:39.146469   47690 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1014 14:39:39.146477   47690 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:39:39.146465   47690 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1014 14:39:39.146482   47690 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 14:39:39.146494   47690 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1014 14:39:39.314651   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1014 14:39:39.316633   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1014 14:39:39.316700   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1014 14:39:39.323998   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 14:39:39.342002   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1014 14:39:39.343823   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1014 14:39:39.356070   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1014 14:39:39.395873   47690 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1014 14:39:39.395919   47690 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1014 14:39:39.395965   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:39.480025   47690 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1014 14:39:39.480067   47690 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1014 14:39:39.480122   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:39.480160   47690 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1014 14:39:39.480196   47690 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1014 14:39:39.480255   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:39.495274   47690 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1014 14:39:39.495314   47690 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 14:39:39.495394   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:39.500089   47690 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1014 14:39:39.500123   47690 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1014 14:39:39.500147   47690 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1014 14:39:39.500132   47690 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1014 14:39:39.500181   47690 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1014 14:39:39.500195   47690 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1014 14:39:39.500219   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:39.500225   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:39.500227   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1014 14:39:39.500233   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1014 14:39:39.500171   47690 ssh_runner.go:195] Run: which crictl
	I1014 14:39:39.500278   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1014 14:39:39.501958   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 14:39:39.520013   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1014 14:39:39.599466   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1014 14:39:39.599488   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1014 14:39:39.599528   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1014 14:39:39.599585   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1014 14:39:39.599596   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1014 14:39:39.603442   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 14:39:39.638400   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1014 14:39:39.759745   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1014 14:39:39.759774   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1014 14:39:39.759745   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1014 14:39:39.759819   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1014 14:39:39.759879   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1014 14:39:39.759945   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1014 14:39:39.772424   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1014 14:39:39.915945   47690 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1014 14:39:39.915945   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1014 14:39:39.916018   47690 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1014 14:39:39.916028   47690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1014 14:39:39.916043   47690 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1014 14:39:39.916111   47690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1014 14:39:39.917873   47690 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1014 14:39:39.917935   47690 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1014 14:39:39.917949   47690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1014 14:39:39.917979   47690 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1014 14:39:39.918017   47690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1014 14:39:39.918045   47690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1014 14:39:39.960956   47690 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1014 14:39:39.961001   47690 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1014 14:39:39.961017   47690 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1014 14:39:39.961063   47690 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1014 14:39:39.961085   47690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1014 14:39:39.986664   47690 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1014 14:39:39.986715   47690 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1014 14:39:39.986747   47690 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1014 14:39:39.986799   47690 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1014 14:39:39.986826   47690 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1014 14:39:39.986849   47690 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1014 14:39:39.986860   47690 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1014 14:39:40.042070   47690 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:39:43.832627   47690 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4: (3.871535497s)
	I1014 14:39:43.832660   47690 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1014 14:39:43.832683   47690 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1014 14:39:43.832734   47690 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1014 14:39:43.832762   47690 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (3.845891603s)
	I1014 14:39:43.832796   47690 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1014 14:39:43.832841   47690 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.790737115s)
	I1014 14:39:44.581875   47690 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1014 14:39:44.581922   47690 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1014 14:39:44.581974   47690 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1014 14:39:45.329365   47690 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1014 14:39:45.329420   47690 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1014 14:39:45.329479   47690 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1014 14:39:45.678194   47690 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1014 14:39:45.678239   47690 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1014 14:39:45.678290   47690 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1014 14:39:46.525535   47690 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1014 14:39:46.525581   47690 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1014 14:39:46.525629   47690 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1014 14:39:48.676048   47690 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.150395445s)
	I1014 14:39:48.676082   47690 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1014 14:39:48.676108   47690 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1014 14:39:48.676189   47690 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1014 14:39:48.819810   47690 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1014 14:39:48.819851   47690 cache_images.go:123] Successfully loaded all cached images
	I1014 14:39:48.819858   47690 cache_images.go:92] duration metric: took 9.674959288s to LoadCachedImages
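The ~9.7s spent in LoadCachedImages above covers transferring each cached archive (skipped here because they already exist on the guest) and loading them into the image store one `podman load` at a time. A compact sketch of that loop over the archives listed above:

    # load every cached image archive into the CRI-O/podman image store, one at a time
    for img in kube-scheduler_v1.24.4 kube-apiserver_v1.24.4 kube-controller-manager_v1.24.4 \
               coredns_v1.8.6 kube-proxy_v1.24.4 etcd_3.5.3-0 pause_3.7; do
      sudo podman load -i /var/lib/minikube/images/$img
    done
    # confirm the runtime now sees them
    sudo crictl images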
	I1014 14:39:48.819873   47690 kubeadm.go:934] updating node { 192.168.39.100 8443 v1.24.4 crio true true} ...
	I1014 14:39:48.819984   47690 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-675136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-675136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 14:39:48.820065   47690 ssh_runner.go:195] Run: crio config
	I1014 14:39:48.873805   47690 cni.go:84] Creating CNI manager for ""
	I1014 14:39:48.873828   47690 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:39:48.873838   47690 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:39:48.873863   47690 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-675136 NodeName:test-preload-675136 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 14:39:48.873996   47690 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-675136"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 14:39:48.874068   47690 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1014 14:39:48.884135   47690 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:39:48.884199   47690 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 14:39:48.893681   47690 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1014 14:39:48.911626   47690 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:39:48.928225   47690 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I1014 14:39:48.945581   47690 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1014 14:39:48.949403   47690 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 14:39:48.962028   47690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:39:49.091347   47690 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:39:49.108245   47690 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136 for IP: 192.168.39.100
	I1014 14:39:49.108269   47690 certs.go:194] generating shared ca certs ...
	I1014 14:39:49.108284   47690 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:39:49.108503   47690 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 14:39:49.108564   47690 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 14:39:49.108575   47690 certs.go:256] generating profile certs ...
	I1014 14:39:49.108679   47690 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/client.key
	I1014 14:39:49.108763   47690 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/apiserver.key.c6a51ed7
	I1014 14:39:49.108827   47690 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/proxy-client.key
	I1014 14:39:49.108982   47690 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 14:39:49.109021   47690 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 14:39:49.109035   47690 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 14:39:49.109067   47690 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 14:39:49.109102   47690 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:39:49.109135   47690 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 14:39:49.109191   47690 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:39:49.110086   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:39:49.150777   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:39:49.194951   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:39:49.232578   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 14:39:49.271208   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 14:39:49.314250   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 14:39:49.339028   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:39:49.362785   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 14:39:49.385251   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 14:39:49.408044   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:39:49.431278   47690 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 14:39:49.454440   47690 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:39:49.471087   47690 ssh_runner.go:195] Run: openssl version
	I1014 14:39:49.476969   47690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 14:39:49.487700   47690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 14:39:49.492015   47690 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:39:49.492052   47690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 14:39:49.497779   47690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 14:39:49.508287   47690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 14:39:49.519089   47690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 14:39:49.523446   47690 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:39:49.523500   47690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 14:39:49.529127   47690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:39:49.540299   47690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:39:49.551111   47690 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:39:49.555533   47690 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:39:49.555574   47690 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:39:49.561001   47690 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:39:49.571843   47690 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:39:49.576506   47690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 14:39:49.582315   47690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 14:39:49.588057   47690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 14:39:49.593952   47690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 14:39:49.599660   47690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 14:39:49.605214   47690 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
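Certificate syncing ends with two OpenSSL patterns: hash-named symlinks under /etc/ssl/certs so the system trust store resolves each CA, and -checkend probes that fail if any control-plane cert expires within 24 hours. Written out for the minikubeCA case shown above:

    # the subject-hash name (b5213941 in this run) comes from `openssl x509 -hash`
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
    # -checkend 86400 exits non-zero if the cert expires within the next 24 hours
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400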
	I1014 14:39:49.610881   47690 kubeadm.go:392] StartCluster: {Name:test-preload-675136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-675136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:39:49.610962   47690 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 14:39:49.611010   47690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:39:49.648999   47690 cri.go:89] found id: ""
	I1014 14:39:49.649057   47690 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 14:39:49.659775   47690 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 14:39:49.659798   47690 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 14:39:49.659845   47690 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 14:39:49.670253   47690 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 14:39:49.670707   47690 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-675136" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:39:49.670826   47690 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-675136" cluster setting kubeconfig missing "test-preload-675136" context setting]
	I1014 14:39:49.671100   47690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:39:49.671755   47690 kapi.go:59] client config for test-preload-675136: &rest.Config{Host:"https://192.168.39.100:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 14:39:49.672327   47690 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 14:39:49.682362   47690 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.100
	I1014 14:39:49.682388   47690 kubeadm.go:1160] stopping kube-system containers ...
	I1014 14:39:49.682400   47690 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 14:39:49.682436   47690 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:39:49.718778   47690 cri.go:89] found id: ""
	I1014 14:39:49.718849   47690 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 14:39:49.735077   47690 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 14:39:49.745047   47690 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 14:39:49.745072   47690 kubeadm.go:157] found existing configuration files:
	
	I1014 14:39:49.745118   47690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 14:39:49.754374   47690 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 14:39:49.754444   47690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 14:39:49.764150   47690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 14:39:49.772991   47690 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 14:39:49.773044   47690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 14:39:49.782439   47690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 14:39:49.791348   47690 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 14:39:49.791392   47690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 14:39:49.800503   47690 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 14:39:49.809140   47690 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 14:39:49.809181   47690 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
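The sequence above checks whether each existing kubeconfig under /etc/kubernetes still references the expected control-plane endpoint and removes any that do not, so the following "kubeadm init phase kubeconfig" run can regenerate them. Below is a minimal sketch of that check-and-remove loop, run locally rather than over SSH; the file names and endpoint come from the log above, everything else is an illustrative assumption and not minikube's own code.

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        endpoint := []byte("https://control-plane.minikube.internal:8443")
        files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}

        for _, name := range files {
            path := filepath.Join("/etc/kubernetes", name)
            data, err := os.ReadFile(path)
            if err != nil {
                // Missing file: nothing to clean up, kubeadm will create it fresh.
                continue
            }
            if !bytes.Contains(data, endpoint) {
                // Stale config pointing at a different endpoint: remove it so it
                // gets regenerated by the kubeconfig phase that follows.
                if err := os.Remove(path); err != nil {
                    fmt.Fprintln(os.Stderr, "remove failed:", err)
                }
            }
        }
    }
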
	I1014 14:39:49.818244   47690 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 14:39:49.827927   47690 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:39:49.927874   47690 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:39:50.530855   47690 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:39:50.797244   47690 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:39:50.859481   47690 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:39:50.938455   47690 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:39:50.938548   47690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:39:51.439564   47690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:39:51.939641   47690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:39:51.975380   47690 api_server.go:72] duration metric: took 1.036921935s to wait for apiserver process to appear ...
	I1014 14:39:51.975407   47690 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:39:51.975431   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:39:51.975911   47690 api_server.go:269] stopped: https://192.168.39.100:8443/healthz: Get "https://192.168.39.100:8443/healthz": dial tcp 192.168.39.100:8443: connect: connection refused
	I1014 14:39:52.476510   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:39:52.477122   47690 api_server.go:269] stopped: https://192.168.39.100:8443/healthz: Get "https://192.168.39.100:8443/healthz": dial tcp 192.168.39.100:8443: connect: connection refused
	I1014 14:39:52.975662   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:39:56.040080   47690 api_server.go:279] https://192.168.39.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 14:39:56.040111   47690 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 14:39:56.040128   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:39:56.112278   47690 api_server.go:279] https://192.168.39.100:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 14:39:56.112312   47690 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 14:39:56.475886   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:39:56.480881   47690 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 14:39:56.480904   47690 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 14:39:56.975480   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:39:56.982007   47690 api_server.go:279] https://192.168.39.100:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 14:39:56.982031   47690 api_server.go:103] status: https://192.168.39.100:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 14:39:57.475558   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:39:57.482253   47690 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1014 14:39:57.489683   47690 api_server.go:141] control plane version: v1.24.4
	I1014 14:39:57.489716   47690 api_server.go:131] duration metric: took 5.514300852s to wait for apiserver health ...
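The health wait above polls the apiserver's /healthz endpoint, treating connection-refused errors and the transient 403/500 responses (RBAC and priority-class bootstrap hooks still running) as "not ready yet" until a 200 comes back. A minimal sketch of that kind of polling loop follows; the URL, timeout, and the InsecureSkipVerify setting are assumptions made for this sketch, not minikube's actual implementation.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    // Non-200 statuses and connection errors are retried after a short sleep.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver's serving cert is self-signed during bootstrap; skipping
            // verification here is an assumption made only for this illustration.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body)
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.100:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
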
	I1014 14:39:57.489727   47690 cni.go:84] Creating CNI manager for ""
	I1014 14:39:57.489735   47690 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:39:57.491641   47690 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 14:39:57.492971   47690 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 14:39:57.504858   47690 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
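The bridge CNI step above copies a 496-byte conflist to /etc/cni/net.d/1-k8s.conflist; the log does not show the file's contents. For orientation only, a generic bridge-plus-portmap conflist of the kind the bridge CNI plugin consumes looks roughly like the following; the name, subnet, and plugin options here are illustrative assumptions and will differ from the file minikube actually writes.

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
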
	I1014 14:39:57.523952   47690 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 14:39:57.524024   47690 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 14:39:57.524044   47690 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 14:39:57.533125   47690 system_pods.go:59] 8 kube-system pods found
	I1014 14:39:57.533168   47690 system_pods.go:61] "coredns-6d4b75cb6d-8crmn" [d44c1806-af72-44fe-8966-cd92dddb3816] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 14:39:57.533179   47690 system_pods.go:61] "coredns-6d4b75cb6d-pnf6c" [3b3eeba1-5611-4d9e-8c48-ef948b3a1929] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 14:39:57.533187   47690 system_pods.go:61] "etcd-test-preload-675136" [40f927ac-291f-452f-a3e3-39ba5e21de23] Running
	I1014 14:39:57.533195   47690 system_pods.go:61] "kube-apiserver-test-preload-675136" [367b14ab-e1ab-402c-84b4-c887969d1907] Running
	I1014 14:39:57.533202   47690 system_pods.go:61] "kube-controller-manager-test-preload-675136" [efe5f400-22c7-4360-ac38-4ceb0c1f106e] Running
	I1014 14:39:57.533209   47690 system_pods.go:61] "kube-proxy-rmldh" [1caacbac-d4d4-4816-8104-f7299bf72cc3] Running
	I1014 14:39:57.533224   47690 system_pods.go:61] "kube-scheduler-test-preload-675136" [36a27819-551d-4aae-a9a0-7dc8eca587c4] Running
	I1014 14:39:57.533236   47690 system_pods.go:61] "storage-provisioner" [59d507e2-c053-4dc4-b2d2-452bddab86de] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 14:39:57.533245   47690 system_pods.go:74] duration metric: took 9.271727ms to wait for pod list to return data ...
	I1014 14:39:57.533258   47690 node_conditions.go:102] verifying NodePressure condition ...
	I1014 14:39:57.536425   47690 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 14:39:57.536449   47690 node_conditions.go:123] node cpu capacity is 2
	I1014 14:39:57.536461   47690 node_conditions.go:105] duration metric: took 3.19612ms to run NodePressure ...
	I1014 14:39:57.536486   47690 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:39:57.788044   47690 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 14:39:57.794559   47690 retry.go:31] will retry after 216.390239ms: kubelet not initialised
	I1014 14:39:58.017401   47690 retry.go:31] will retry after 537.159864ms: kubelet not initialised
	I1014 14:39:58.559259   47690 retry.go:31] will retry after 665.265094ms: kubelet not initialised
	I1014 14:39:59.230394   47690 retry.go:31] will retry after 974.978246ms: kubelet not initialised
	I1014 14:40:00.210546   47690 retry.go:31] will retry after 692.29554ms: kubelet not initialised
	I1014 14:40:00.908623   47690 retry.go:31] will retry after 1.479374364s: kubelet not initialised
	I1014 14:40:02.394314   47690 kubeadm.go:739] kubelet initialised
	I1014 14:40:02.394337   47690 kubeadm.go:740] duration metric: took 4.606270153s waiting for restarted kubelet to initialise ...
	I1014 14:40:02.394346   47690 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:40:02.399768   47690 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-8crmn" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:02.405139   47690 pod_ready.go:98] node "test-preload-675136" hosting pod "coredns-6d4b75cb6d-8crmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.405172   47690 pod_ready.go:82] duration metric: took 5.379792ms for pod "coredns-6d4b75cb6d-8crmn" in "kube-system" namespace to be "Ready" ...
	E1014 14:40:02.405182   47690 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-675136" hosting pod "coredns-6d4b75cb6d-8crmn" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.405188   47690 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:02.409701   47690 pod_ready.go:98] node "test-preload-675136" hosting pod "etcd-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.409729   47690 pod_ready.go:82] duration metric: took 4.531856ms for pod "etcd-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	E1014 14:40:02.409741   47690 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-675136" hosting pod "etcd-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.409751   47690 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:02.414338   47690 pod_ready.go:98] node "test-preload-675136" hosting pod "kube-apiserver-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.414363   47690 pod_ready.go:82] duration metric: took 4.602099ms for pod "kube-apiserver-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	E1014 14:40:02.414373   47690 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-675136" hosting pod "kube-apiserver-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.414382   47690 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:02.418831   47690 pod_ready.go:98] node "test-preload-675136" hosting pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.418860   47690 pod_ready.go:82] duration metric: took 4.466034ms for pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	E1014 14:40:02.418871   47690 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-675136" hosting pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.418880   47690 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rmldh" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:02.793885   47690 pod_ready.go:98] node "test-preload-675136" hosting pod "kube-proxy-rmldh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.793914   47690 pod_ready.go:82] duration metric: took 375.025858ms for pod "kube-proxy-rmldh" in "kube-system" namespace to be "Ready" ...
	E1014 14:40:02.793927   47690 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-675136" hosting pod "kube-proxy-rmldh" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:02.793934   47690 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:03.192926   47690 pod_ready.go:98] node "test-preload-675136" hosting pod "kube-scheduler-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:03.192955   47690 pod_ready.go:82] duration metric: took 399.014218ms for pod "kube-scheduler-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	E1014 14:40:03.192967   47690 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-675136" hosting pod "kube-scheduler-test-preload-675136" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:03.192976   47690 pod_ready.go:39] duration metric: took 798.615734ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
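The pod_ready wait above repeatedly inspects each system-critical pod (and, as the "skipping!" messages show, its hosting node) until the Ready condition is True or the timeout expires. A hedged client-go sketch of that style of check is below; the kubeconfig path is a placeholder, the pod name is taken from the log above, and this is not minikube's own pod_ready implementation.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Placeholder kubeconfig path; substitute the profile's real kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll one pod until it is Ready or the deadline passes.
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-6d4b75cb6d-8crmn", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
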
	I1014 14:40:03.192997   47690 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 14:40:03.205483   47690 ops.go:34] apiserver oom_adj: -16
	I1014 14:40:03.205512   47690 kubeadm.go:597] duration metric: took 13.545707242s to restartPrimaryControlPlane
	I1014 14:40:03.205524   47690 kubeadm.go:394] duration metric: took 13.594652341s to StartCluster
	I1014 14:40:03.205541   47690 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:40:03.205605   47690 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:40:03.206230   47690 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:40:03.206455   47690 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 14:40:03.206524   47690 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 14:40:03.206636   47690 addons.go:69] Setting storage-provisioner=true in profile "test-preload-675136"
	I1014 14:40:03.206655   47690 addons.go:234] Setting addon storage-provisioner=true in "test-preload-675136"
	W1014 14:40:03.206672   47690 addons.go:243] addon storage-provisioner should already be in state true
	I1014 14:40:03.206704   47690 host.go:66] Checking if "test-preload-675136" exists ...
	I1014 14:40:03.206705   47690 config.go:182] Loaded profile config "test-preload-675136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1014 14:40:03.206653   47690 addons.go:69] Setting default-storageclass=true in profile "test-preload-675136"
	I1014 14:40:03.206757   47690 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-675136"
	I1014 14:40:03.207000   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:40:03.207033   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:40:03.207110   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:40:03.207149   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:40:03.208392   47690 out.go:177] * Verifying Kubernetes components...
	I1014 14:40:03.209702   47690 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:40:03.222064   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40237
	I1014 14:40:03.222382   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32963
	I1014 14:40:03.222623   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:40:03.222960   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:40:03.223186   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:40:03.223209   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:40:03.223485   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:40:03.223504   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:40:03.223571   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:40:03.223735   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetState
	I1014 14:40:03.223823   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:40:03.224371   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:40:03.224404   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:40:03.226094   47690 kapi.go:59] client config for test-preload-675136: &rest.Config{Host:"https://192.168.39.100:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/test-preload-675136/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 14:40:03.226370   47690 addons.go:234] Setting addon default-storageclass=true in "test-preload-675136"
	W1014 14:40:03.226388   47690 addons.go:243] addon default-storageclass should already be in state true
	I1014 14:40:03.226412   47690 host.go:66] Checking if "test-preload-675136" exists ...
	I1014 14:40:03.226742   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:40:03.226781   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:40:03.239029   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I1014 14:40:03.239370   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:40:03.239825   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:40:03.239839   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:40:03.240186   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:40:03.240392   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetState
	I1014 14:40:03.241807   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:40:03.241857   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I1014 14:40:03.242292   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:40:03.242736   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:40:03.242759   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:40:03.243148   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:40:03.243616   47690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:40:03.243660   47690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:40:03.243962   47690 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:40:03.245214   47690 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:40:03.245229   47690 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 14:40:03.245243   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:40:03.248319   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:40:03.248700   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:40:03.248725   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:40:03.248928   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:40:03.249118   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:40:03.249303   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:40:03.249431   47690 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa Username:docker}
	I1014 14:40:03.277910   47690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I1014 14:40:03.278401   47690 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:40:03.278863   47690 main.go:141] libmachine: Using API Version  1
	I1014 14:40:03.278888   47690 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:40:03.279210   47690 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:40:03.279383   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetState
	I1014 14:40:03.280913   47690 main.go:141] libmachine: (test-preload-675136) Calling .DriverName
	I1014 14:40:03.281116   47690 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 14:40:03.281134   47690 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 14:40:03.281187   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHHostname
	I1014 14:40:03.284430   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:40:03.284810   47690 main.go:141] libmachine: (test-preload-675136) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:93:84", ip: ""} in network mk-test-preload-675136: {Iface:virbr1 ExpiryTime:2024-10-14 15:39:25 +0000 UTC Type:0 Mac:52:54:00:73:93:84 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:test-preload-675136 Clientid:01:52:54:00:73:93:84}
	I1014 14:40:03.284836   47690 main.go:141] libmachine: (test-preload-675136) DBG | domain test-preload-675136 has defined IP address 192.168.39.100 and MAC address 52:54:00:73:93:84 in network mk-test-preload-675136
	I1014 14:40:03.284978   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHPort
	I1014 14:40:03.285144   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHKeyPath
	I1014 14:40:03.285321   47690 main.go:141] libmachine: (test-preload-675136) Calling .GetSSHUsername
	I1014 14:40:03.285452   47690 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/test-preload-675136/id_rsa Username:docker}
	I1014 14:40:03.386795   47690 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:40:03.407789   47690 node_ready.go:35] waiting up to 6m0s for node "test-preload-675136" to be "Ready" ...
	I1014 14:40:03.471548   47690 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:40:03.520432   47690 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 14:40:04.519753   47690 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.048168011s)
	I1014 14:40:04.519808   47690 main.go:141] libmachine: Making call to close driver server
	I1014 14:40:04.519813   47690 main.go:141] libmachine: Making call to close driver server
	I1014 14:40:04.519823   47690 main.go:141] libmachine: (test-preload-675136) Calling .Close
	I1014 14:40:04.519827   47690 main.go:141] libmachine: (test-preload-675136) Calling .Close
	I1014 14:40:04.520110   47690 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:40:04.520128   47690 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:40:04.520132   47690 main.go:141] libmachine: (test-preload-675136) DBG | Closing plugin on server side
	I1014 14:40:04.520136   47690 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:40:04.520111   47690 main.go:141] libmachine: (test-preload-675136) DBG | Closing plugin on server side
	I1014 14:40:04.520150   47690 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:40:04.520138   47690 main.go:141] libmachine: Making call to close driver server
	I1014 14:40:04.520160   47690 main.go:141] libmachine: Making call to close driver server
	I1014 14:40:04.520175   47690 main.go:141] libmachine: (test-preload-675136) Calling .Close
	I1014 14:40:04.520162   47690 main.go:141] libmachine: (test-preload-675136) Calling .Close
	I1014 14:40:04.520393   47690 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:40:04.520444   47690 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:40:04.520418   47690 main.go:141] libmachine: (test-preload-675136) DBG | Closing plugin on server side
	I1014 14:40:04.520475   47690 main.go:141] libmachine: (test-preload-675136) DBG | Closing plugin on server side
	I1014 14:40:04.520503   47690 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:40:04.520513   47690 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:40:04.526786   47690 main.go:141] libmachine: Making call to close driver server
	I1014 14:40:04.526805   47690 main.go:141] libmachine: (test-preload-675136) Calling .Close
	I1014 14:40:04.527040   47690 main.go:141] libmachine: (test-preload-675136) DBG | Closing plugin on server side
	I1014 14:40:04.527084   47690 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:40:04.527097   47690 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:40:04.529273   47690 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 14:40:04.530716   47690 addons.go:510] duration metric: took 1.324197543s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 14:40:05.410407   47690 node_ready.go:53] node "test-preload-675136" has status "Ready":"False"
	I1014 14:40:06.912136   47690 node_ready.go:49] node "test-preload-675136" has status "Ready":"True"
	I1014 14:40:06.912165   47690 node_ready.go:38] duration metric: took 3.504342104s for node "test-preload-675136" to be "Ready" ...
	I1014 14:40:06.912177   47690 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:40:06.916961   47690 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-8crmn" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:06.922418   47690 pod_ready.go:93] pod "coredns-6d4b75cb6d-8crmn" in "kube-system" namespace has status "Ready":"True"
	I1014 14:40:06.922441   47690 pod_ready.go:82] duration metric: took 5.457572ms for pod "coredns-6d4b75cb6d-8crmn" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:06.922452   47690 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:06.927639   47690 pod_ready.go:93] pod "etcd-test-preload-675136" in "kube-system" namespace has status "Ready":"True"
	I1014 14:40:06.927655   47690 pod_ready.go:82] duration metric: took 5.196448ms for pod "etcd-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:06.927662   47690 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:07.933929   47690 pod_ready.go:93] pod "kube-apiserver-test-preload-675136" in "kube-system" namespace has status "Ready":"True"
	I1014 14:40:07.933962   47690 pod_ready.go:82] duration metric: took 1.006292048s for pod "kube-apiserver-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:07.933975   47690 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:09.940177   47690 pod_ready.go:103] pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace has status "Ready":"False"
	I1014 14:40:10.939905   47690 pod_ready.go:93] pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace has status "Ready":"True"
	I1014 14:40:10.939931   47690 pod_ready.go:82] duration metric: took 3.005948103s for pod "kube-controller-manager-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:10.939944   47690 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rmldh" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:10.944456   47690 pod_ready.go:93] pod "kube-proxy-rmldh" in "kube-system" namespace has status "Ready":"True"
	I1014 14:40:10.944478   47690 pod_ready.go:82] duration metric: took 4.526027ms for pod "kube-proxy-rmldh" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:10.944488   47690 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:10.949103   47690 pod_ready.go:93] pod "kube-scheduler-test-preload-675136" in "kube-system" namespace has status "Ready":"True"
	I1014 14:40:10.949119   47690 pod_ready.go:82] duration metric: took 4.624557ms for pod "kube-scheduler-test-preload-675136" in "kube-system" namespace to be "Ready" ...
	I1014 14:40:10.949128   47690 pod_ready.go:39] duration metric: took 4.036940169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:40:10.949140   47690 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:40:10.949186   47690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:40:10.964605   47690 api_server.go:72] duration metric: took 7.758123078s to wait for apiserver process to appear ...
	I1014 14:40:10.964624   47690 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:40:10.964637   47690 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1014 14:40:10.971498   47690 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1014 14:40:10.972520   47690 api_server.go:141] control plane version: v1.24.4
	I1014 14:40:10.972547   47690 api_server.go:131] duration metric: took 7.911516ms to wait for apiserver health ...
	I1014 14:40:10.972555   47690 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 14:40:11.114284   47690 system_pods.go:59] 7 kube-system pods found
	I1014 14:40:11.114316   47690 system_pods.go:61] "coredns-6d4b75cb6d-8crmn" [d44c1806-af72-44fe-8966-cd92dddb3816] Running
	I1014 14:40:11.114321   47690 system_pods.go:61] "etcd-test-preload-675136" [40f927ac-291f-452f-a3e3-39ba5e21de23] Running
	I1014 14:40:11.114327   47690 system_pods.go:61] "kube-apiserver-test-preload-675136" [367b14ab-e1ab-402c-84b4-c887969d1907] Running
	I1014 14:40:11.114337   47690 system_pods.go:61] "kube-controller-manager-test-preload-675136" [efe5f400-22c7-4360-ac38-4ceb0c1f106e] Running
	I1014 14:40:11.114342   47690 system_pods.go:61] "kube-proxy-rmldh" [1caacbac-d4d4-4816-8104-f7299bf72cc3] Running
	I1014 14:40:11.114347   47690 system_pods.go:61] "kube-scheduler-test-preload-675136" [36a27819-551d-4aae-a9a0-7dc8eca587c4] Running
	I1014 14:40:11.114352   47690 system_pods.go:61] "storage-provisioner" [59d507e2-c053-4dc4-b2d2-452bddab86de] Running
	I1014 14:40:11.114358   47690 system_pods.go:74] duration metric: took 141.797355ms to wait for pod list to return data ...
	I1014 14:40:11.114367   47690 default_sa.go:34] waiting for default service account to be created ...
	I1014 14:40:11.311717   47690 default_sa.go:45] found service account: "default"
	I1014 14:40:11.311746   47690 default_sa.go:55] duration metric: took 197.373281ms for default service account to be created ...
	I1014 14:40:11.311755   47690 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 14:40:11.514259   47690 system_pods.go:86] 7 kube-system pods found
	I1014 14:40:11.514288   47690 system_pods.go:89] "coredns-6d4b75cb6d-8crmn" [d44c1806-af72-44fe-8966-cd92dddb3816] Running
	I1014 14:40:11.514293   47690 system_pods.go:89] "etcd-test-preload-675136" [40f927ac-291f-452f-a3e3-39ba5e21de23] Running
	I1014 14:40:11.514297   47690 system_pods.go:89] "kube-apiserver-test-preload-675136" [367b14ab-e1ab-402c-84b4-c887969d1907] Running
	I1014 14:40:11.514300   47690 system_pods.go:89] "kube-controller-manager-test-preload-675136" [efe5f400-22c7-4360-ac38-4ceb0c1f106e] Running
	I1014 14:40:11.514304   47690 system_pods.go:89] "kube-proxy-rmldh" [1caacbac-d4d4-4816-8104-f7299bf72cc3] Running
	I1014 14:40:11.514307   47690 system_pods.go:89] "kube-scheduler-test-preload-675136" [36a27819-551d-4aae-a9a0-7dc8eca587c4] Running
	I1014 14:40:11.514310   47690 system_pods.go:89] "storage-provisioner" [59d507e2-c053-4dc4-b2d2-452bddab86de] Running
	I1014 14:40:11.514316   47690 system_pods.go:126] duration metric: took 202.555056ms to wait for k8s-apps to be running ...
	I1014 14:40:11.514323   47690 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 14:40:11.514381   47690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:40:11.529834   47690 system_svc.go:56] duration metric: took 15.498889ms WaitForService to wait for kubelet
	I1014 14:40:11.529870   47690 kubeadm.go:582] duration metric: took 8.32338853s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:40:11.529888   47690 node_conditions.go:102] verifying NodePressure condition ...
	I1014 14:40:11.711268   47690 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 14:40:11.711299   47690 node_conditions.go:123] node cpu capacity is 2
	I1014 14:40:11.711312   47690 node_conditions.go:105] duration metric: took 181.418703ms to run NodePressure ...
	I1014 14:40:11.711328   47690 start.go:241] waiting for startup goroutines ...
	I1014 14:40:11.711339   47690 start.go:246] waiting for cluster config update ...
	I1014 14:40:11.711353   47690 start.go:255] writing updated cluster config ...
	I1014 14:40:11.711715   47690 ssh_runner.go:195] Run: rm -f paused
	I1014 14:40:11.756838   47690 start.go:600] kubectl: 1.31.1, cluster: 1.24.4 (minor skew: 7)
	I1014 14:40:11.759127   47690 out.go:201] 
	W1014 14:40:11.760833   47690 out.go:270] ! /usr/local/bin/kubectl is version 1.31.1, which may have incompatibilities with Kubernetes 1.24.4.
	I1014 14:40:11.762168   47690 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1014 14:40:11.763521   47690 out.go:177] * Done! kubectl is now configured to use "test-preload-675136" cluster and "default" namespace by default
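The warning a few lines above flags a minor-version skew of 7 between the host's kubectl (1.31.1) and the cluster (1.24.4); kubectl is only supported within one minor version of the apiserver, hence the suggestion to use the pinned "minikube kubectl" instead. A small sketch of how such a skew can be computed is below; the parsing helper is an illustrative assumption, not minikube's code.

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorOf extracts the minor component from a "major.minor.patch" version string.
    func minorOf(v string) (int, error) {
        parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
        if len(parts) < 2 {
            return 0, fmt.Errorf("unexpected version %q", v)
        }
        return strconv.Atoi(parts[1])
    }

    func main() {
        kubectlMinor, _ := minorOf("1.31.1") // host kubectl, from the log above
        clusterMinor, _ := minorOf("1.24.4") // cluster version, from the log above
        skew := kubectlMinor - clusterMinor
        if skew < 0 {
            skew = -skew
        }
        fmt.Printf("minor skew: %d\n", skew) // prints 7, matching the warning
        if skew > 1 {
            fmt.Println("kubectl may have incompatibilities with this cluster")
        }
    }
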
	
	
	==> CRI-O <==
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.648239149Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728916812648211935,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbd7672f-6d92-4a0a-ab97-787e2ba6c126 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.648857421Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b86f213-22fb-4234-b493-8c214ef4a40c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.648912271Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b86f213-22fb-4234-b493-8c214ef4a40c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.649087599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c96e21b3ea9d06fd6ec117a8d7e377385925aeada1f22c67be41016fca48045,PodSandboxId:004de653d794d8868cce37560e469ee46e402638536f69686c121f4bdb93ed9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728916805141019173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8crmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44c1806-af72-44fe-8966-cd92dddb3816,},Annotations:map[string]string{io.kubernetes.container.hash: a6ed9bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e039bf825f6c005c2838500f8b95bb775cc2ed1d1ffeec6d27006ed161beabe8,PodSandboxId:a82cf57ddad7523a3700a1011be58c9463e36eb2552d44109034335a082df5fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728916797958009062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 59d507e2-c053-4dc4-b2d2-452bddab86de,},Annotations:map[string]string{io.kubernetes.container.hash: 503b1772,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e2af3a6a7c497564bc412bcd1e99ece699626bb06ac2ef774cab75c7a41d,PodSandboxId:d65ea3964470faa389e65be9a9fe95b19a16fb5a88e2816fc43d3fbc2339567e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728916797694406863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
aacbac-d4d4-4816-8104-f7299bf72cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7e07348c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f0077210548252858874df3d0b7da547b50d9c59b534dfa8c685bdec29f511,PodSandboxId:6a4d6e506ff875fa440ba949507b12e4ca67d9e7019f8722961b84880477b554,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728916791723619099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf96192c
1f66b45c4869501be0f254f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b878b6daea8b0f1113eb999973ae7ab3e562d3e2442e2aa86346697d9977020,PodSandboxId:cb0a3588a6fb52237723bdb888afbd90e73e710a2825d623df25c99f342be4ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728916791714519421,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b47c2cf21149eb4198a2f463453e9c2,},Annotations:map
[string]string{io.kubernetes.container.hash: 611ea30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a11328461d65d25bb717e47564fe8daa7557b7da7975db63825fe7eaac61b2,PodSandboxId:27d0b92aeea87a0a39f9c93e2c77d942540e233ad9c514231d23ee4c71d30b99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728916791654418669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6422495f6d35391499c8946b37f73b9c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1770dbf4bc25c10454de3af52234b361b8171c553eae2c194f3cb9479b5d180b,PodSandboxId:00888cd0b0a89184cd98a978ebafff35ba475838d78801264e8fc1af728dadac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728916791614643180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d3cb48eaf779a41e9819b1b7caf9bc,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b86f213-22fb-4234-b493-8c214ef4a40c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.688243035Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a2ef7f0-e61b-4b11-ac3a-9f5685254a31 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.688320669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a2ef7f0-e61b-4b11-ac3a-9f5685254a31 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.689579443Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=756322a0-2c58-47f8-8a19-16eed4cb9e74 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.690184783Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728916812690161017,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=756322a0-2c58-47f8-8a19-16eed4cb9e74 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.690799785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4a0ca3f-d1b1-4357-b2a5-49b2ab0ab0e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.690897409Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4a0ca3f-d1b1-4357-b2a5-49b2ab0ab0e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.691075425Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c96e21b3ea9d06fd6ec117a8d7e377385925aeada1f22c67be41016fca48045,PodSandboxId:004de653d794d8868cce37560e469ee46e402638536f69686c121f4bdb93ed9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728916805141019173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8crmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44c1806-af72-44fe-8966-cd92dddb3816,},Annotations:map[string]string{io.kubernetes.container.hash: a6ed9bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e039bf825f6c005c2838500f8b95bb775cc2ed1d1ffeec6d27006ed161beabe8,PodSandboxId:a82cf57ddad7523a3700a1011be58c9463e36eb2552d44109034335a082df5fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728916797958009062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 59d507e2-c053-4dc4-b2d2-452bddab86de,},Annotations:map[string]string{io.kubernetes.container.hash: 503b1772,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e2af3a6a7c497564bc412bcd1e99ece699626bb06ac2ef774cab75c7a41d,PodSandboxId:d65ea3964470faa389e65be9a9fe95b19a16fb5a88e2816fc43d3fbc2339567e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728916797694406863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
aacbac-d4d4-4816-8104-f7299bf72cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7e07348c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f0077210548252858874df3d0b7da547b50d9c59b534dfa8c685bdec29f511,PodSandboxId:6a4d6e506ff875fa440ba949507b12e4ca67d9e7019f8722961b84880477b554,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728916791723619099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf96192c
1f66b45c4869501be0f254f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b878b6daea8b0f1113eb999973ae7ab3e562d3e2442e2aa86346697d9977020,PodSandboxId:cb0a3588a6fb52237723bdb888afbd90e73e710a2825d623df25c99f342be4ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728916791714519421,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b47c2cf21149eb4198a2f463453e9c2,},Annotations:map
[string]string{io.kubernetes.container.hash: 611ea30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a11328461d65d25bb717e47564fe8daa7557b7da7975db63825fe7eaac61b2,PodSandboxId:27d0b92aeea87a0a39f9c93e2c77d942540e233ad9c514231d23ee4c71d30b99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728916791654418669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6422495f6d35391499c8946b37f73b9c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1770dbf4bc25c10454de3af52234b361b8171c553eae2c194f3cb9479b5d180b,PodSandboxId:00888cd0b0a89184cd98a978ebafff35ba475838d78801264e8fc1af728dadac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728916791614643180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d3cb48eaf779a41e9819b1b7caf9bc,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4a0ca3f-d1b1-4357-b2a5-49b2ab0ab0e3 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.731065612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6e3c5e8e-f7c0-4b3a-b813-97f84e573965 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.731137320Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6e3c5e8e-f7c0-4b3a-b813-97f84e573965 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.732471883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d4287d12-7dbe-4255-b16a-dd44e2a6ce64 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.732992439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728916812732968383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d4287d12-7dbe-4255-b16a-dd44e2a6ce64 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.733662028Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=97db7be1-72e5-495b-97db-c576c0130919 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.733726740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=97db7be1-72e5-495b-97db-c576c0130919 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.733937781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c96e21b3ea9d06fd6ec117a8d7e377385925aeada1f22c67be41016fca48045,PodSandboxId:004de653d794d8868cce37560e469ee46e402638536f69686c121f4bdb93ed9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728916805141019173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8crmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44c1806-af72-44fe-8966-cd92dddb3816,},Annotations:map[string]string{io.kubernetes.container.hash: a6ed9bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e039bf825f6c005c2838500f8b95bb775cc2ed1d1ffeec6d27006ed161beabe8,PodSandboxId:a82cf57ddad7523a3700a1011be58c9463e36eb2552d44109034335a082df5fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728916797958009062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 59d507e2-c053-4dc4-b2d2-452bddab86de,},Annotations:map[string]string{io.kubernetes.container.hash: 503b1772,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e2af3a6a7c497564bc412bcd1e99ece699626bb06ac2ef774cab75c7a41d,PodSandboxId:d65ea3964470faa389e65be9a9fe95b19a16fb5a88e2816fc43d3fbc2339567e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728916797694406863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
aacbac-d4d4-4816-8104-f7299bf72cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7e07348c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f0077210548252858874df3d0b7da547b50d9c59b534dfa8c685bdec29f511,PodSandboxId:6a4d6e506ff875fa440ba949507b12e4ca67d9e7019f8722961b84880477b554,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728916791723619099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf96192c
1f66b45c4869501be0f254f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b878b6daea8b0f1113eb999973ae7ab3e562d3e2442e2aa86346697d9977020,PodSandboxId:cb0a3588a6fb52237723bdb888afbd90e73e710a2825d623df25c99f342be4ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728916791714519421,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b47c2cf21149eb4198a2f463453e9c2,},Annotations:map
[string]string{io.kubernetes.container.hash: 611ea30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a11328461d65d25bb717e47564fe8daa7557b7da7975db63825fe7eaac61b2,PodSandboxId:27d0b92aeea87a0a39f9c93e2c77d942540e233ad9c514231d23ee4c71d30b99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728916791654418669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6422495f6d35391499c8946b37f73b9c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1770dbf4bc25c10454de3af52234b361b8171c553eae2c194f3cb9479b5d180b,PodSandboxId:00888cd0b0a89184cd98a978ebafff35ba475838d78801264e8fc1af728dadac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728916791614643180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d3cb48eaf779a41e9819b1b7caf9bc,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=97db7be1-72e5-495b-97db-c576c0130919 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.768408367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=79386a60-5b0d-4152-b32a-af18d5c092b7 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.768496060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=79386a60-5b0d-4152-b32a-af18d5c092b7 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.770117162Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a5fca2d6-7711-4a84-900f-6b0eca510bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.770543013Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728916812770521163,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a5fca2d6-7711-4a84-900f-6b0eca510bf8 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.771403084Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00e2f6da-607e-48e7-a0ec-e12a2168093a name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.771480571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00e2f6da-607e-48e7-a0ec-e12a2168093a name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:40:12 test-preload-675136 crio[686]: time="2024-10-14 14:40:12.771660060Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4c96e21b3ea9d06fd6ec117a8d7e377385925aeada1f22c67be41016fca48045,PodSandboxId:004de653d794d8868cce37560e469ee46e402638536f69686c121f4bdb93ed9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1728916805141019173,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-8crmn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d44c1806-af72-44fe-8966-cd92dddb3816,},Annotations:map[string]string{io.kubernetes.container.hash: a6ed9bb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e039bf825f6c005c2838500f8b95bb775cc2ed1d1ffeec6d27006ed161beabe8,PodSandboxId:a82cf57ddad7523a3700a1011be58c9463e36eb2552d44109034335a082df5fd,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728916797958009062,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 59d507e2-c053-4dc4-b2d2-452bddab86de,},Annotations:map[string]string{io.kubernetes.container.hash: 503b1772,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9938e2af3a6a7c497564bc412bcd1e99ece699626bb06ac2ef774cab75c7a41d,PodSandboxId:d65ea3964470faa389e65be9a9fe95b19a16fb5a88e2816fc43d3fbc2339567e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1728916797694406863,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rmldh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c
aacbac-d4d4-4816-8104-f7299bf72cc3,},Annotations:map[string]string{io.kubernetes.container.hash: 7e07348c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b2f0077210548252858874df3d0b7da547b50d9c59b534dfa8c685bdec29f511,PodSandboxId:6a4d6e506ff875fa440ba949507b12e4ca67d9e7019f8722961b84880477b554,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1728916791723619099,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5cf96192c
1f66b45c4869501be0f254f,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2b878b6daea8b0f1113eb999973ae7ab3e562d3e2442e2aa86346697d9977020,PodSandboxId:cb0a3588a6fb52237723bdb888afbd90e73e710a2825d623df25c99f342be4ae,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1728916791714519421,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b47c2cf21149eb4198a2f463453e9c2,},Annotations:map
[string]string{io.kubernetes.container.hash: 611ea30b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6a11328461d65d25bb717e47564fe8daa7557b7da7975db63825fe7eaac61b2,PodSandboxId:27d0b92aeea87a0a39f9c93e2c77d942540e233ad9c514231d23ee4c71d30b99,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1728916791654418669,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6422495f6d35391499c8946b37f73b9c,},Annotations:map[string]str
ing{io.kubernetes.container.hash: cca4ed8b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1770dbf4bc25c10454de3af52234b361b8171c553eae2c194f3cb9479b5d180b,PodSandboxId:00888cd0b0a89184cd98a978ebafff35ba475838d78801264e8fc1af728dadac,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1728916791614643180,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-675136,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d3cb48eaf779a41e9819b1b7caf9bc,},Annotation
s:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00e2f6da-607e-48e7-a0ec-e12a2168093a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4c96e21b3ea9d       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   004de653d794d       coredns-6d4b75cb6d-8crmn
	e039bf825f6c0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   a82cf57ddad75       storage-provisioner
	9938e2af3a6a7       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   d65ea3964470f       kube-proxy-rmldh
	b2f0077210548       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   21 seconds ago      Running             kube-scheduler            1                   6a4d6e506ff87       kube-scheduler-test-preload-675136
	2b878b6daea8b       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   21 seconds ago      Running             etcd                      1                   cb0a3588a6fb5       etcd-test-preload-675136
	c6a11328461d6       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   21 seconds ago      Running             kube-apiserver            1                   27d0b92aeea87       kube-apiserver-test-preload-675136
	1770dbf4bc25c       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   21 seconds ago      Running             kube-controller-manager   1                   00888cd0b0a89       kube-controller-manager-test-preload-675136
	
	
	==> coredns [4c96e21b3ea9d06fd6ec117a8d7e377385925aeada1f22c67be41016fca48045] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:34298 - 29935 "HINFO IN 2653055665193489867.8924560983065116676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010036332s
	
	
	==> describe nodes <==
	Name:               test-preload-675136
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-675136
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=test-preload-675136
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T14_38_42_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:38:38 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-675136
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:40:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:40:06 +0000   Mon, 14 Oct 2024 14:38:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:40:06 +0000   Mon, 14 Oct 2024 14:38:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:40:06 +0000   Mon, 14 Oct 2024 14:38:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:40:06 +0000   Mon, 14 Oct 2024 14:40:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    test-preload-675136
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 8a54b6bebe3c4d8b824ca97b3e8f1553
	  System UUID:                8a54b6be-be3c-4d8b-824c-a97b3e8f1553
	  Boot ID:                    97c6c663-df27-4900-914a-e9a97bbff2cc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-8crmn                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     78s
	  kube-system                 etcd-test-preload-675136                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         90s
	  kube-system                 kube-apiserver-test-preload-675136             250m (12%)    0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-test-preload-675136    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-rmldh                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-scheduler-test-preload-675136             100m (5%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  Starting                 77s                kube-proxy       
	  Normal  NodeHasSufficientMemory  98s (x5 over 98s)  kubelet          Node test-preload-675136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    98s (x5 over 98s)  kubelet          Node test-preload-675136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     98s (x5 over 98s)  kubelet          Node test-preload-675136 status is now: NodeHasSufficientPID
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node test-preload-675136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node test-preload-675136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     90s                kubelet          Node test-preload-675136 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  90s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                80s                kubelet          Node test-preload-675136 status is now: NodeReady
	  Normal  RegisteredNode           79s                node-controller  Node test-preload-675136 event: Registered Node test-preload-675136 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21s (x8 over 22s)  kubelet          Node test-preload-675136 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 22s)  kubelet          Node test-preload-675136 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 22s)  kubelet          Node test-preload-675136 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node test-preload-675136 event: Registered Node test-preload-675136 in Controller
	
	
	==> dmesg <==
	[Oct14 14:39] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050154] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039029] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.856983] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.603523] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.640164] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.359354] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.053682] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058588] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.172361] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.140945] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.273150] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[ +14.239696] systemd-fstab-generator[1008]: Ignoring "noauto" option for root device
	[  +0.067833] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.635482] systemd-fstab-generator[1138]: Ignoring "noauto" option for root device
	[  +5.287287] kauditd_printk_skb: 105 callbacks suppressed
	[Oct14 14:40] systemd-fstab-generator[1780]: Ignoring "noauto" option for root device
	[  +0.110707] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.467259] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [2b878b6daea8b0f1113eb999973ae7ab3e562d3e2442e2aa86346697d9977020] <==
	{"level":"info","ts":"2024-10-14T14:39:52.081Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"3276445ff8d31e34","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-14T14:39:52.095Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-14T14:39:52.107Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 switched to configuration voters=(3636168928135421492)"}
	{"level":"info","ts":"2024-10-14T14:39:52.107Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","added-peer-id":"3276445ff8d31e34","added-peer-peer-urls":["https://192.168.39.100:2380"]}
	{"level":"info","ts":"2024-10-14T14:39:52.107Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6cf58294dcaef1c8","local-member-id":"3276445ff8d31e34","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:39:52.107Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:39:52.108Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-14T14:39:52.108Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"3276445ff8d31e34","initial-advertise-peer-urls":["https://192.168.39.100:2380"],"listen-peer-urls":["https://192.168.39.100:2380"],"advertise-client-urls":["https://192.168.39.100:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.100:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T14:39:52.108Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T14:39:52.108Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-10-14T14:39:52.108Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.100:2380"}
	{"level":"info","ts":"2024-10-14T14:39:53.445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-14T14:39:53.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-14T14:39:53.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgPreVoteResp from 3276445ff8d31e34 at term 2"}
	{"level":"info","ts":"2024-10-14T14:39:53.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became candidate at term 3"}
	{"level":"info","ts":"2024-10-14T14:39:53.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 received MsgVoteResp from 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-10-14T14:39:53.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3276445ff8d31e34 became leader at term 3"}
	{"level":"info","ts":"2024-10-14T14:39:53.446Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3276445ff8d31e34 elected leader 3276445ff8d31e34 at term 3"}
	{"level":"info","ts":"2024-10-14T14:39:53.446Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"3276445ff8d31e34","local-member-attributes":"{Name:test-preload-675136 ClientURLs:[https://192.168.39.100:2379]}","request-path":"/0/members/3276445ff8d31e34/attributes","cluster-id":"6cf58294dcaef1c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:39:53.448Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:39:53.450Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.100:2379"}
	{"level":"info","ts":"2024-10-14T14:39:53.450Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:39:53.451Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:39:53.451Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:39:53.457Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:40:13 up 0 min,  0 users,  load average: 0.86, 0.27, 0.09
	Linux test-preload-675136 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [c6a11328461d65d25bb717e47564fe8daa7557b7da7975db63825fe7eaac61b2] <==
	I1014 14:39:56.015864       1 controller.go:85] Starting OpenAPI controller
	I1014 14:39:56.015880       1 controller.go:85] Starting OpenAPI V3 controller
	I1014 14:39:56.015968       1 naming_controller.go:291] Starting NamingConditionController
	I1014 14:39:56.015991       1 establishing_controller.go:76] Starting EstablishingController
	I1014 14:39:56.016365       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1014 14:39:56.016380       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1014 14:39:56.016396       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1014 14:39:56.123205       1 cache.go:39] Caches are synced for autoregister controller
	I1014 14:39:56.123426       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1014 14:39:56.123462       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 14:39:56.126981       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1014 14:39:56.128598       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1014 14:39:56.144263       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1014 14:39:56.144419       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1014 14:39:56.173537       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:39:56.671786       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1014 14:39:57.004269       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:39:57.672767       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1014 14:39:57.693936       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1014 14:39:57.737517       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1014 14:39:57.763701       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:39:57.771791       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:39:58.116531       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1014 14:40:08.572161       1 controller.go:611] quota admission added evaluator for: endpoints
	I1014 14:40:08.799183       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1770dbf4bc25c10454de3af52234b361b8171c553eae2c194f3cb9479b5d180b] <==
	I1014 14:40:08.574509       1 shared_informer.go:262] Caches are synced for disruption
	I1014 14:40:08.574536       1 disruption.go:371] Sending events to api server.
	I1014 14:40:08.575472       1 shared_informer.go:262] Caches are synced for cronjob
	I1014 14:40:08.576349       1 shared_informer.go:262] Caches are synced for crt configmap
	I1014 14:40:08.578522       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I1014 14:40:08.580845       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1014 14:40:08.581973       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1014 14:40:08.582159       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1014 14:40:08.582265       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I1014 14:40:08.584469       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1014 14:40:08.585647       1 shared_informer.go:262] Caches are synced for ReplicationController
	I1014 14:40:08.628863       1 shared_informer.go:262] Caches are synced for persistent volume
	I1014 14:40:08.640352       1 shared_informer.go:262] Caches are synced for stateful set
	I1014 14:40:08.668157       1 shared_informer.go:262] Caches are synced for PV protection
	I1014 14:40:08.679737       1 shared_informer.go:262] Caches are synced for ephemeral
	I1014 14:40:08.683382       1 shared_informer.go:262] Caches are synced for PVC protection
	I1014 14:40:08.701402       1 shared_informer.go:262] Caches are synced for expand
	I1014 14:40:08.717908       1 shared_informer.go:262] Caches are synced for attach detach
	I1014 14:40:08.754223       1 shared_informer.go:262] Caches are synced for resource quota
	I1014 14:40:08.781572       1 shared_informer.go:262] Caches are synced for resource quota
	I1014 14:40:08.790680       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1014 14:40:08.807993       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1014 14:40:09.190782       1 shared_informer.go:262] Caches are synced for garbage collector
	I1014 14:40:09.190870       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1014 14:40:09.221610       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [9938e2af3a6a7c497564bc412bcd1e99ece699626bb06ac2ef774cab75c7a41d] <==
	I1014 14:39:58.037015       1 node.go:163] Successfully retrieved node IP: 192.168.39.100
	I1014 14:39:58.037098       1 server_others.go:138] "Detected node IP" address="192.168.39.100"
	I1014 14:39:58.037148       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1014 14:39:58.105598       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1014 14:39:58.105628       1 server_others.go:206] "Using iptables Proxier"
	I1014 14:39:58.105660       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1014 14:39:58.106624       1 server.go:661] "Version info" version="v1.24.4"
	I1014 14:39:58.106680       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:39:58.108253       1 config.go:317] "Starting service config controller"
	I1014 14:39:58.108564       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1014 14:39:58.108615       1 config.go:226] "Starting endpoint slice config controller"
	I1014 14:39:58.108635       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1014 14:39:58.110936       1 config.go:444] "Starting node config controller"
	I1014 14:39:58.111055       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1014 14:39:58.209007       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1014 14:39:58.209137       1 shared_informer.go:262] Caches are synced for service config
	I1014 14:39:58.211646       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [b2f0077210548252858874df3d0b7da547b50d9c59b534dfa8c685bdec29f511] <==
	W1014 14:39:56.102514       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 14:39:56.102542       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1014 14:39:56.102700       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 14:39:56.102769       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1014 14:39:56.102942       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:39:56.102979       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1014 14:39:56.103057       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:39:56.103095       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1014 14:39:56.103202       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:39:56.103232       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1014 14:39:56.103405       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 14:39:56.105879       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1014 14:39:56.106027       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 14:39:56.106061       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1014 14:39:56.106168       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:39:56.106234       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1014 14:39:56.106302       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 14:39:56.106332       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1014 14:39:56.106388       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:39:56.106413       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1014 14:39:56.106511       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:39:56.106544       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1014 14:39:56.111069       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:39:56.111131       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1014 14:39:57.579420       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:39:56 test-preload-675136 kubelet[1145]: I1014 14:39:56.994758    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1caacbac-d4d4-4816-8104-f7299bf72cc3-kube-proxy\") pod \"kube-proxy-rmldh\" (UID: \"1caacbac-d4d4-4816-8104-f7299bf72cc3\") " pod="kube-system/kube-proxy-rmldh"
	Oct 14 14:39:56 test-preload-675136 kubelet[1145]: I1014 14:39:56.994783    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume\") pod \"coredns-6d4b75cb6d-8crmn\" (UID: \"d44c1806-af72-44fe-8966-cd92dddb3816\") " pod="kube-system/coredns-6d4b75cb6d-8crmn"
	Oct 14 14:39:56 test-preload-675136 kubelet[1145]: I1014 14:39:56.994841    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1caacbac-d4d4-4816-8104-f7299bf72cc3-lib-modules\") pod \"kube-proxy-rmldh\" (UID: \"1caacbac-d4d4-4816-8104-f7299bf72cc3\") " pod="kube-system/kube-proxy-rmldh"
	Oct 14 14:39:56 test-preload-675136 kubelet[1145]: I1014 14:39:56.994876    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgrg\" (UniqueName: \"kubernetes.io/projected/1caacbac-d4d4-4816-8104-f7299bf72cc3-kube-api-access-wmgrg\") pod \"kube-proxy-rmldh\" (UID: \"1caacbac-d4d4-4816-8104-f7299bf72cc3\") " pod="kube-system/kube-proxy-rmldh"
	Oct 14 14:39:56 test-preload-675136 kubelet[1145]: I1014 14:39:56.994903    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/59d507e2-c053-4dc4-b2d2-452bddab86de-tmp\") pod \"storage-provisioner\" (UID: \"59d507e2-c053-4dc4-b2d2-452bddab86de\") " pod="kube-system/storage-provisioner"
	Oct 14 14:39:56 test-preload-675136 kubelet[1145]: I1014 14:39:56.994928    1145 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc6sq\" (UniqueName: \"kubernetes.io/projected/59d507e2-c053-4dc4-b2d2-452bddab86de-kube-api-access-wc6sq\") pod \"storage-provisioner\" (UID: \"59d507e2-c053-4dc4-b2d2-452bddab86de\") " pod="kube-system/storage-provisioner"
	Oct 14 14:39:56 test-preload-675136 kubelet[1145]: I1014 14:39:56.994941    1145 reconciler.go:159] "Reconciler: start to sync state"
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: I1014 14:39:57.105744    1145 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zw47\" (UniqueName: \"kubernetes.io/projected/3b3eeba1-5611-4d9e-8c48-ef948b3a1929-kube-api-access-2zw47\") pod \"3b3eeba1-5611-4d9e-8c48-ef948b3a1929\" (UID: \"3b3eeba1-5611-4d9e-8c48-ef948b3a1929\") "
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: I1014 14:39:57.105970    1145 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b3eeba1-5611-4d9e-8c48-ef948b3a1929-config-volume\") pod \"3b3eeba1-5611-4d9e-8c48-ef948b3a1929\" (UID: \"3b3eeba1-5611-4d9e-8c48-ef948b3a1929\") "
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: W1014 14:39:57.107131    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/3b3eeba1-5611-4d9e-8c48-ef948b3a1929/volumes/kubernetes.io~projected/kube-api-access-2zw47: clearQuota called, but quotas disabled
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: E1014 14:39:57.107453    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: E1014 14:39:57.107575    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume podName:d44c1806-af72-44fe-8966-cd92dddb3816 nodeName:}" failed. No retries permitted until 2024-10-14 14:39:57.607539287 +0000 UTC m=+6.816853368 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume") pod "coredns-6d4b75cb6d-8crmn" (UID: "d44c1806-af72-44fe-8966-cd92dddb3816") : object "kube-system"/"coredns" not registered
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: W1014 14:39:57.107582    1145 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/3b3eeba1-5611-4d9e-8c48-ef948b3a1929/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: I1014 14:39:57.108163    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b3eeba1-5611-4d9e-8c48-ef948b3a1929-kube-api-access-2zw47" (OuterVolumeSpecName: "kube-api-access-2zw47") pod "3b3eeba1-5611-4d9e-8c48-ef948b3a1929" (UID: "3b3eeba1-5611-4d9e-8c48-ef948b3a1929"). InnerVolumeSpecName "kube-api-access-2zw47". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: I1014 14:39:57.108415    1145 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b3eeba1-5611-4d9e-8c48-ef948b3a1929-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b3eeba1-5611-4d9e-8c48-ef948b3a1929" (UID: "3b3eeba1-5611-4d9e-8c48-ef948b3a1929"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: I1014 14:39:57.208149    1145 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b3eeba1-5611-4d9e-8c48-ef948b3a1929-config-volume\") on node \"test-preload-675136\" DevicePath \"\""
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: I1014 14:39:57.208197    1145 reconciler.go:384] "Volume detached for volume \"kube-api-access-2zw47\" (UniqueName: \"kubernetes.io/projected/3b3eeba1-5611-4d9e-8c48-ef948b3a1929-kube-api-access-2zw47\") on node \"test-preload-675136\" DevicePath \"\""
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: E1014 14:39:57.611263    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 14 14:39:57 test-preload-675136 kubelet[1145]: E1014 14:39:57.611370    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume podName:d44c1806-af72-44fe-8966-cd92dddb3816 nodeName:}" failed. No retries permitted until 2024-10-14 14:39:58.611354662 +0000 UTC m=+7.820668748 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume") pod "coredns-6d4b75cb6d-8crmn" (UID: "d44c1806-af72-44fe-8966-cd92dddb3816") : object "kube-system"/"coredns" not registered
	Oct 14 14:39:58 test-preload-675136 kubelet[1145]: E1014 14:39:58.619051    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 14 14:39:58 test-preload-675136 kubelet[1145]: E1014 14:39:58.619158    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume podName:d44c1806-af72-44fe-8966-cd92dddb3816 nodeName:}" failed. No retries permitted until 2024-10-14 14:40:00.619140802 +0000 UTC m=+9.828454884 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume") pod "coredns-6d4b75cb6d-8crmn" (UID: "d44c1806-af72-44fe-8966-cd92dddb3816") : object "kube-system"/"coredns" not registered
	Oct 14 14:39:59 test-preload-675136 kubelet[1145]: E1014 14:39:59.021262    1145 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-8crmn" podUID=d44c1806-af72-44fe-8966-cd92dddb3816
	Oct 14 14:39:59 test-preload-675136 kubelet[1145]: I1014 14:39:59.026974    1145 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3b3eeba1-5611-4d9e-8c48-ef948b3a1929 path="/var/lib/kubelet/pods/3b3eeba1-5611-4d9e-8c48-ef948b3a1929/volumes"
	Oct 14 14:40:00 test-preload-675136 kubelet[1145]: E1014 14:40:00.633379    1145 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Oct 14 14:40:00 test-preload-675136 kubelet[1145]: E1014 14:40:00.633801    1145 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume podName:d44c1806-af72-44fe-8966-cd92dddb3816 nodeName:}" failed. No retries permitted until 2024-10-14 14:40:04.633779484 +0000 UTC m=+13.843093565 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d44c1806-af72-44fe-8966-cd92dddb3816-config-volume") pod "coredns-6d4b75cb6d-8crmn" (UID: "d44c1806-af72-44fe-8966-cd92dddb3816") : object "kube-system"/"coredns" not registered
	
	
	==> storage-provisioner [e039bf825f6c005c2838500f8b95bb775cc2ed1d1ffeec6d27006ed161beabe8] <==
	I1014 14:39:58.096479       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-675136 -n test-preload-675136
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-675136 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-675136" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-675136
--- FAIL: TestPreload (165.66s)
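Note (added for context, not part of the captured log): the kubelet output above shows the two conditions typical of this failure mode — the coredns ConfigMap is repeatedly reported as 'object "kube-system"/"coredns" not registered', and the node has no CNI configuration in /etc/cni/net.d/ yet. A minimal sketch of checks one might run against this profile to confirm the same state; only the profile name comes from the log, the commands themselves are illustrative:
	# does the coredns ConfigMap exist and is the API serving it?
	kubectl --context test-preload-675136 -n kube-system get configmap coredns
	# which pods are still not Running on the restarted node?
	kubectl --context test-preload-675136 -n kube-system get pods -o wide
	# has any CNI config been written inside the VM yet?
	out/minikube-linux-amd64 ssh -p test-preload-675136 "ls /etc/cni/net.d/"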

TestKubernetesUpgrade (401.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m10.924303989s)

-- stdout --
	* [kubernetes-upgrade-058309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-058309" primary control-plane node in "kubernetes-upgrade-058309" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1014 14:43:07.542913   52225 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:43:07.543452   52225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:43:07.543469   52225 out.go:358] Setting ErrFile to fd 2...
	I1014 14:43:07.543476   52225 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:43:07.543970   52225 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:43:07.544892   52225 out.go:352] Setting JSON to false
	I1014 14:43:07.546143   52225 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5137,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:43:07.546252   52225 start.go:139] virtualization: kvm guest
	I1014 14:43:07.548457   52225 out.go:177] * [kubernetes-upgrade-058309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:43:07.550228   52225 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:43:07.550239   52225 notify.go:220] Checking for updates...
	I1014 14:43:07.552715   52225 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:43:07.553996   52225 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:43:07.555457   52225 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:43:07.556712   52225 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:43:07.557807   52225 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:43:07.559394   52225 config.go:182] Loaded profile config "NoKubernetes-229138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:43:07.559548   52225 config.go:182] Loaded profile config "force-systemd-env-338682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:43:07.559678   52225 config.go:182] Loaded profile config "running-upgrade-833927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1014 14:43:07.559799   52225 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:43:07.598051   52225 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 14:43:07.599371   52225 start.go:297] selected driver: kvm2
	I1014 14:43:07.599388   52225 start.go:901] validating driver "kvm2" against <nil>
	I1014 14:43:07.599404   52225 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:43:07.600412   52225 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:43:07.600499   52225 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:43:07.617755   52225 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:43:07.617813   52225 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 14:43:07.618193   52225 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 14:43:07.618245   52225 cni.go:84] Creating CNI manager for ""
	I1014 14:43:07.618311   52225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:43:07.618323   52225 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 14:43:07.618395   52225 start.go:340] cluster config:
	{Name:kubernetes-upgrade-058309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-058309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:43:07.618548   52225 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:43:07.620716   52225 out.go:177] * Starting "kubernetes-upgrade-058309" primary control-plane node in "kubernetes-upgrade-058309" cluster
	I1014 14:43:07.621857   52225 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:43:07.621891   52225 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:43:07.621901   52225 cache.go:56] Caching tarball of preloaded images
	I1014 14:43:07.621976   52225 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:43:07.621986   52225 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:43:07.622077   52225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/config.json ...
	I1014 14:43:07.622097   52225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/config.json: {Name:mk772b4e81cb68fe866be82f3e0c929609c58de4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:43:07.622278   52225 start.go:360] acquireMachinesLock for kubernetes-upgrade-058309: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:43:47.427358   52225 start.go:364] duration metric: took 39.805005965s to acquireMachinesLock for "kubernetes-upgrade-058309"
	I1014 14:43:47.427448   52225 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-058309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-058309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 14:43:47.427584   52225 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 14:43:47.429813   52225 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 14:43:47.430014   52225 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:43:47.430069   52225 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:43:47.446282   52225 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45727
	I1014 14:43:47.446796   52225 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:43:47.447338   52225 main.go:141] libmachine: Using API Version  1
	I1014 14:43:47.447359   52225 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:43:47.447682   52225 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:43:47.447869   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetMachineName
	I1014 14:43:47.448017   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:43:47.448159   52225 start.go:159] libmachine.API.Create for "kubernetes-upgrade-058309" (driver="kvm2")
	I1014 14:43:47.448191   52225 client.go:168] LocalClient.Create starting
	I1014 14:43:47.448226   52225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 14:43:47.448277   52225 main.go:141] libmachine: Decoding PEM data...
	I1014 14:43:47.448298   52225 main.go:141] libmachine: Parsing certificate...
	I1014 14:43:47.448364   52225 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 14:43:47.448390   52225 main.go:141] libmachine: Decoding PEM data...
	I1014 14:43:47.448410   52225 main.go:141] libmachine: Parsing certificate...
	I1014 14:43:47.448437   52225 main.go:141] libmachine: Running pre-create checks...
	I1014 14:43:47.448450   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .PreCreateCheck
	I1014 14:43:47.448753   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetConfigRaw
	I1014 14:43:47.449181   52225 main.go:141] libmachine: Creating machine...
	I1014 14:43:47.449192   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .Create
	I1014 14:43:47.449320   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Creating KVM machine...
	I1014 14:43:47.450477   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found existing default KVM network
	I1014 14:43:47.451745   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:47.451575   52843 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b7:69:49} reservation:<nil>}
	I1014 14:43:47.452698   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:47.452600   52843 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000223e10}
	I1014 14:43:47.452716   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | created network xml: 
	I1014 14:43:47.452724   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | <network>
	I1014 14:43:47.452731   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |   <name>mk-kubernetes-upgrade-058309</name>
	I1014 14:43:47.452761   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |   <dns enable='no'/>
	I1014 14:43:47.452778   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |   
	I1014 14:43:47.452790   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1014 14:43:47.452801   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |     <dhcp>
	I1014 14:43:47.452823   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1014 14:43:47.452835   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |     </dhcp>
	I1014 14:43:47.452847   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |   </ip>
	I1014 14:43:47.452854   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG |   
	I1014 14:43:47.452865   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | </network>
	I1014 14:43:47.452874   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | 
	I1014 14:43:47.458184   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | trying to create private KVM network mk-kubernetes-upgrade-058309 192.168.50.0/24...
	I1014 14:43:47.531235   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | private KVM network mk-kubernetes-upgrade-058309 192.168.50.0/24 created
	I1014 14:43:47.531298   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:47.531199   52843 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:43:47.531321   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309 ...
	I1014 14:43:47.531338   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 14:43:47.531439   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 14:43:47.770199   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:47.770028   52843 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa...
	I1014 14:43:48.002092   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:48.001974   52843 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/kubernetes-upgrade-058309.rawdisk...
	I1014 14:43:48.002126   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Writing magic tar header
	I1014 14:43:48.002145   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Writing SSH key tar header
	I1014 14:43:48.002164   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:48.002094   52843 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309 ...
	I1014 14:43:48.002182   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309
	I1014 14:43:48.002264   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 14:43:48.002293   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:43:48.002327   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309 (perms=drwx------)
	I1014 14:43:48.002350   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 14:43:48.002362   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 14:43:48.002373   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 14:43:48.002379   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 14:43:48.002387   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 14:43:48.002393   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 14:43:48.002402   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Creating domain...
	I1014 14:43:48.002410   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 14:43:48.002415   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Checking permissions on dir: /home/jenkins
	I1014 14:43:48.002424   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Checking permissions on dir: /home
	I1014 14:43:48.002433   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Skipping /home - not owner
	I1014 14:43:48.003742   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) define libvirt domain using xml: 
	I1014 14:43:48.003779   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) <domain type='kvm'>
	I1014 14:43:48.003799   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   <name>kubernetes-upgrade-058309</name>
	I1014 14:43:48.003810   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   <memory unit='MiB'>2200</memory>
	I1014 14:43:48.003824   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   <vcpu>2</vcpu>
	I1014 14:43:48.003833   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   <features>
	I1014 14:43:48.003839   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <acpi/>
	I1014 14:43:48.003849   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <apic/>
	I1014 14:43:48.003867   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <pae/>
	I1014 14:43:48.003879   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     
	I1014 14:43:48.003908   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   </features>
	I1014 14:43:48.003939   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   <cpu mode='host-passthrough'>
	I1014 14:43:48.003951   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   
	I1014 14:43:48.003957   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   </cpu>
	I1014 14:43:48.003965   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   <os>
	I1014 14:43:48.003971   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <type>hvm</type>
	I1014 14:43:48.003978   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <boot dev='cdrom'/>
	I1014 14:43:48.003996   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <boot dev='hd'/>
	I1014 14:43:48.004008   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <bootmenu enable='no'/>
	I1014 14:43:48.004016   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   </os>
	I1014 14:43:48.004024   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   <devices>
	I1014 14:43:48.004031   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <disk type='file' device='cdrom'>
	I1014 14:43:48.004048   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/boot2docker.iso'/>
	I1014 14:43:48.004059   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <target dev='hdc' bus='scsi'/>
	I1014 14:43:48.004067   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <readonly/>
	I1014 14:43:48.004075   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     </disk>
	I1014 14:43:48.004083   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <disk type='file' device='disk'>
	I1014 14:43:48.004097   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 14:43:48.004113   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/kubernetes-upgrade-058309.rawdisk'/>
	I1014 14:43:48.004124   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <target dev='hda' bus='virtio'/>
	I1014 14:43:48.004132   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     </disk>
	I1014 14:43:48.004140   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <interface type='network'>
	I1014 14:43:48.004160   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <source network='mk-kubernetes-upgrade-058309'/>
	I1014 14:43:48.004176   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <model type='virtio'/>
	I1014 14:43:48.004194   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     </interface>
	I1014 14:43:48.004214   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <interface type='network'>
	I1014 14:43:48.004226   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <source network='default'/>
	I1014 14:43:48.004239   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <model type='virtio'/>
	I1014 14:43:48.004266   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     </interface>
	I1014 14:43:48.004278   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <serial type='pty'>
	I1014 14:43:48.004291   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <target port='0'/>
	I1014 14:43:48.004305   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     </serial>
	I1014 14:43:48.004313   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <console type='pty'>
	I1014 14:43:48.004329   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <target type='serial' port='0'/>
	I1014 14:43:48.004341   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     </console>
	I1014 14:43:48.004356   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     <rng model='virtio'>
	I1014 14:43:48.004373   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)       <backend model='random'>/dev/random</backend>
	I1014 14:43:48.004389   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     </rng>
	I1014 14:43:48.004397   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     
	I1014 14:43:48.004410   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)     
	I1014 14:43:48.004421   52225 main.go:141] libmachine: (kubernetes-upgrade-058309)   </devices>
	I1014 14:43:48.004430   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) </domain>
	I1014 14:43:48.004439   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) 
	I1014 14:43:48.008821   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:d6:ab:55 in network default
	I1014 14:43:48.009473   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Ensuring networks are active...
	I1014 14:43:48.009495   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:48.010278   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Ensuring network default is active
	I1014 14:43:48.010762   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Ensuring network mk-kubernetes-upgrade-058309 is active
	I1014 14:43:48.011350   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Getting domain xml...
	I1014 14:43:48.012148   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Creating domain...
	I1014 14:43:49.252735   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Waiting to get IP...
	I1014 14:43:49.253635   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:49.254062   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:49.254124   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:49.254049   52843 retry.go:31] will retry after 307.708049ms: waiting for machine to come up
	I1014 14:43:49.563821   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:49.564390   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:49.564418   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:49.564342   52843 retry.go:31] will retry after 295.966692ms: waiting for machine to come up
	I1014 14:43:49.861936   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:49.862435   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:49.862460   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:49.862407   52843 retry.go:31] will retry after 384.740718ms: waiting for machine to come up
	I1014 14:43:50.249042   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:50.249748   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:50.249784   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:50.249688   52843 retry.go:31] will retry after 476.360076ms: waiting for machine to come up
	I1014 14:43:50.727222   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:50.727734   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:50.727769   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:50.727680   52843 retry.go:31] will retry after 723.029682ms: waiting for machine to come up
	I1014 14:43:51.451764   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:51.452251   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:51.452285   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:51.452191   52843 retry.go:31] will retry after 711.225312ms: waiting for machine to come up
	I1014 14:43:52.165519   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:52.166190   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:52.166215   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:52.166146   52843 retry.go:31] will retry after 994.894555ms: waiting for machine to come up
	I1014 14:43:53.163275   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:53.163833   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:53.163863   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:53.163781   52843 retry.go:31] will retry after 946.538451ms: waiting for machine to come up
	I1014 14:43:54.111888   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:54.112364   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:54.112387   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:54.112316   52843 retry.go:31] will retry after 1.522892204s: waiting for machine to come up
	I1014 14:43:55.636617   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:55.637027   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:55.637074   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:55.636984   52843 retry.go:31] will retry after 1.599112776s: waiting for machine to come up
	I1014 14:43:57.239308   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:43:57.239867   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:43:57.239898   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:43:57.239820   52843 retry.go:31] will retry after 2.903468271s: waiting for machine to come up
	I1014 14:44:00.145243   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:00.145773   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:44:00.145825   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:44:00.145736   52843 retry.go:31] will retry after 3.109644917s: waiting for machine to come up
	I1014 14:44:03.256512   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:03.256913   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:44:03.256958   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:44:03.256859   52843 retry.go:31] will retry after 3.589309942s: waiting for machine to come up
	I1014 14:44:06.850514   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:06.851012   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find current IP address of domain kubernetes-upgrade-058309 in network mk-kubernetes-upgrade-058309
	I1014 14:44:06.851041   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | I1014 14:44:06.850963   52843 retry.go:31] will retry after 4.569558293s: waiting for machine to come up
	I1014 14:44:11.423615   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.424134   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Found IP for machine: 192.168.50.21
	I1014 14:44:11.424172   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has current primary IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.424178   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Reserving static IP address...
	I1014 14:44:11.424495   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-058309", mac: "52:54:00:58:14:45", ip: "192.168.50.21"} in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.497535   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Reserved static IP address: 192.168.50.21
	I1014 14:44:11.497558   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Waiting for SSH to be available...
	I1014 14:44:11.497581   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Getting to WaitForSSH function...
	I1014 14:44:11.500241   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.500792   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:14:45}
	I1014 14:44:11.500842   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.501020   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Using SSH client type: external
	I1014 14:44:11.501052   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa (-rw-------)
	I1014 14:44:11.501101   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.21 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 14:44:11.501123   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | About to run SSH command:
	I1014 14:44:11.501142   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | exit 0
	I1014 14:44:11.626813   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | SSH cmd err, output: <nil>: 
	I1014 14:44:11.627125   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) KVM machine creation complete!
	I1014 14:44:11.627473   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetConfigRaw
	I1014 14:44:11.628059   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:44:11.628237   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:44:11.628368   52225 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 14:44:11.628379   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetState
	I1014 14:44:11.629616   52225 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 14:44:11.629637   52225 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 14:44:11.629642   52225 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 14:44:11.629650   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:11.632078   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.632546   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:11.632587   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.632740   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:11.632914   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.633053   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.633161   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:11.633273   52225 main.go:141] libmachine: Using SSH client type: native
	I1014 14:44:11.633483   52225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I1014 14:44:11.633497   52225 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 14:44:11.738007   52225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:44:11.738036   52225 main.go:141] libmachine: Detecting the provisioner...
	I1014 14:44:11.738046   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:11.740967   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.741403   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:11.741443   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.741625   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:11.741811   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.741968   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.742119   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:11.742321   52225 main.go:141] libmachine: Using SSH client type: native
	I1014 14:44:11.742492   52225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I1014 14:44:11.742502   52225 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 14:44:11.847546   52225 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 14:44:11.847657   52225 main.go:141] libmachine: found compatible host: buildroot
	I1014 14:44:11.847672   52225 main.go:141] libmachine: Provisioning with buildroot...
	I1014 14:44:11.847684   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetMachineName
	I1014 14:44:11.847912   52225 buildroot.go:166] provisioning hostname "kubernetes-upgrade-058309"
	I1014 14:44:11.847941   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetMachineName
	I1014 14:44:11.848126   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:11.851075   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.851392   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:11.851421   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.851571   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:11.851730   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.851929   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.852046   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:11.852222   52225 main.go:141] libmachine: Using SSH client type: native
	I1014 14:44:11.852394   52225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I1014 14:44:11.852407   52225 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-058309 && echo "kubernetes-upgrade-058309" | sudo tee /etc/hostname
	I1014 14:44:11.971904   52225 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-058309
	
	I1014 14:44:11.971950   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:11.975150   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.975511   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:11.975542   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:11.975735   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:11.975947   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.976115   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:11.976261   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:11.976422   52225 main.go:141] libmachine: Using SSH client type: native
	I1014 14:44:11.976594   52225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I1014 14:44:11.976611   52225 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-058309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-058309/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-058309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:44:12.093624   52225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:44:12.093671   52225 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 14:44:12.093699   52225 buildroot.go:174] setting up certificates
	I1014 14:44:12.093714   52225 provision.go:84] configureAuth start
	I1014 14:44:12.093723   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetMachineName
	I1014 14:44:12.093990   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetIP
	I1014 14:44:12.096658   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.097004   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.097028   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.097185   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:12.099250   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.099541   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.099573   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.099708   52225 provision.go:143] copyHostCerts
	I1014 14:44:12.099759   52225 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 14:44:12.099781   52225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:44:12.099846   52225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 14:44:12.099967   52225 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 14:44:12.099979   52225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:44:12.100012   52225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 14:44:12.100105   52225 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 14:44:12.100116   52225 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:44:12.100147   52225 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 14:44:12.100239   52225 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-058309 san=[127.0.0.1 192.168.50.21 kubernetes-upgrade-058309 localhost minikube]
	I1014 14:44:12.262814   52225 provision.go:177] copyRemoteCerts
	I1014 14:44:12.262874   52225 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:44:12.262901   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:12.265635   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.266028   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.266059   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.266467   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:12.266714   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.266953   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:12.267150   52225 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa Username:docker}
	I1014 14:44:12.354826   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 14:44:12.383156   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1014 14:44:12.410975   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 14:44:12.442208   52225 provision.go:87] duration metric: took 348.481528ms to configureAuth
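(Editor's note: the configureAuth step above, ending at provision.go:87, signs a server certificate with the minikube CA and the SANs listed in the earlier "generating server cert" line. A minimal Go sketch of that kind of SAN-bearing certificate generation is shown below; it uses a throwaway CA instead of minikube's real ca.pem/ca-key.pem, elides error handling, and is illustrative only, not minikube's provision.go code.)

package main

// Illustrative sketch only -- not minikube's provision.go. Errors are elided
// for brevity; the SANs mirror the "san=[...]" values in the log above.
import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for .minikube/certs/ca.pem + ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert carrying the SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-058309"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-058309", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.21")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})[:27]))
}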
	I1014 14:44:12.442241   52225 buildroot.go:189] setting minikube options for container-runtime
	I1014 14:44:12.442469   52225 config.go:182] Loaded profile config "kubernetes-upgrade-058309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:44:12.442570   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:12.445666   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.446074   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.446119   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.446264   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:12.446454   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.446620   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.446787   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:12.446942   52225 main.go:141] libmachine: Using SSH client type: native
	I1014 14:44:12.447192   52225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I1014 14:44:12.447210   52225 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 14:44:12.696350   52225 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 14:44:12.696384   52225 main.go:141] libmachine: Checking connection to Docker...
	I1014 14:44:12.696397   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetURL
	I1014 14:44:12.697900   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Using libvirt version 6000000
	I1014 14:44:12.700677   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.701102   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.701136   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.701339   52225 main.go:141] libmachine: Docker is up and running!
	I1014 14:44:12.701357   52225 main.go:141] libmachine: Reticulating splines...
	I1014 14:44:12.701366   52225 client.go:171] duration metric: took 25.253163738s to LocalClient.Create
	I1014 14:44:12.701394   52225 start.go:167] duration metric: took 25.253235603s to libmachine.API.Create "kubernetes-upgrade-058309"
	I1014 14:44:12.701425   52225 start.go:293] postStartSetup for "kubernetes-upgrade-058309" (driver="kvm2")
	I1014 14:44:12.701448   52225 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:44:12.701474   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:44:12.701755   52225 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:44:12.701789   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:12.704560   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.705015   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.705053   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.705225   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:12.705417   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.705596   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:12.705837   52225 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa Username:docker}
	I1014 14:44:12.790370   52225 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:44:12.797129   52225 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 14:44:12.797156   52225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 14:44:12.797217   52225 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 14:44:12.797281   52225 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 14:44:12.797369   52225 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:44:12.810259   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:44:12.838848   52225 start.go:296] duration metric: took 137.403753ms for postStartSetup
	I1014 14:44:12.838922   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetConfigRaw
	I1014 14:44:12.839571   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetIP
	I1014 14:44:12.842303   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.842658   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.842689   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.842950   52225 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/config.json ...
	I1014 14:44:12.843225   52225 start.go:128] duration metric: took 25.415627287s to createHost
	I1014 14:44:12.843256   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:12.846004   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.846399   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.846428   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.846605   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:12.846794   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.846961   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.847077   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:12.847238   52225 main.go:141] libmachine: Using SSH client type: native
	I1014 14:44:12.847473   52225 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.21 22 <nil> <nil>}
	I1014 14:44:12.847489   52225 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 14:44:12.956995   52225 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728917052.913199080
	
	I1014 14:44:12.957024   52225 fix.go:216] guest clock: 1728917052.913199080
	I1014 14:44:12.957037   52225 fix.go:229] Guest: 2024-10-14 14:44:12.91319908 +0000 UTC Remote: 2024-10-14 14:44:12.843243074 +0000 UTC m=+65.342098063 (delta=69.956006ms)
	I1014 14:44:12.957093   52225 fix.go:200] guest clock delta is within tolerance: 69.956006ms
	I1014 14:44:12.957107   52225 start.go:83] releasing machines lock for "kubernetes-upgrade-058309", held for 25.529703475s
	I1014 14:44:12.957151   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:44:12.957440   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetIP
	I1014 14:44:12.960851   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.961302   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.961331   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.961512   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:44:12.962087   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:44:12.962281   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:44:12.962389   52225 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:44:12.962435   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:12.962491   52225 ssh_runner.go:195] Run: cat /version.json
	I1014 14:44:12.962510   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:44:12.965403   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.965687   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.966086   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.966107   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.966148   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:12.966179   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:12.966410   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:12.966632   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:44:12.966640   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.966848   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:12.966875   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:44:12.967008   52225 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa Username:docker}
	I1014 14:44:12.967294   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:44:12.967432   52225 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa Username:docker}
	I1014 14:44:13.090500   52225 ssh_runner.go:195] Run: systemctl --version
	I1014 14:44:13.096791   52225 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 14:44:13.272925   52225 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 14:44:13.279005   52225 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 14:44:13.279083   52225 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:44:13.296129   52225 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 14:44:13.296163   52225 start.go:495] detecting cgroup driver to use...
	I1014 14:44:13.296242   52225 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 14:44:13.313651   52225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 14:44:13.329535   52225 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:44:13.329590   52225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:44:13.343687   52225 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:44:13.359635   52225 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:44:13.483889   52225 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:44:13.657469   52225 docker.go:233] disabling docker service ...
	I1014 14:44:13.657556   52225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:44:13.674947   52225 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:44:13.688549   52225 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:44:13.867379   52225 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:44:14.016507   52225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:44:14.032987   52225 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:44:14.055058   52225 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 14:44:14.055137   52225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:44:14.071557   52225 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 14:44:14.071633   52225 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:44:14.083476   52225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:44:14.094927   52225 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:44:14.106892   52225 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
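(Editor's note: the sed invocations above pin the pause image to registry.k8s.io/pause:3.2, switch cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. The Go sketch below performs the same rewrites on an in-memory, made-up config fragment; it is illustrative only and does not touch any real file.)

package main

import (
	"fmt"
	"regexp"
)

// In-memory equivalent of the sed edits shown in the log above.
func main() {
	conf := `# pause_image = "registry.k8s.io/pause:3.9"
# cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup, then pin it to "pod" after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}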
	I1014 14:44:14.118434   52225 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:44:14.128067   52225 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 14:44:14.128130   52225 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 14:44:14.143411   52225 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:44:14.157989   52225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:44:14.317502   52225 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 14:44:14.422025   52225 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 14:44:14.422120   52225 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 14:44:14.428284   52225 start.go:563] Will wait 60s for crictl version
	I1014 14:44:14.428352   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:14.433071   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:44:14.487717   52225 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 14:44:14.487808   52225 ssh_runner.go:195] Run: crio --version
	I1014 14:44:14.520967   52225 ssh_runner.go:195] Run: crio --version
	I1014 14:44:14.562582   52225 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 14:44:14.564020   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetIP
	I1014 14:44:14.566984   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:14.567444   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:44:14.567471   52225 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:44:14.567760   52225 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 14:44:14.573278   52225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 14:44:14.590507   52225 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-058309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-058309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:44:14.590610   52225 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:44:14.590664   52225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:44:14.642378   52225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
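(Editor's note: the "couldn't find preloaded image ... assuming images are not preloaded" decision above comes from parsing `sudo crictl images --output json` and looking for the expected kube-apiserver tag. A rough Go sketch of that check follows; the JSON shape and field names are an abbreviated assumption of crictl's output, not a spec, and the sample input is hardcoded.)

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down, assumed shape of `crictl images --output json`.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(raw []byte, want string) (bool, error) {
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/pause:3.2"]}]}`)
	ok, _ := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.20.0")
	fmt.Println("preloaded:", ok) // false -> fall back to the preload tarball below
}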
	I1014 14:44:14.642441   52225 ssh_runner.go:195] Run: which lz4
	I1014 14:44:14.646645   52225 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 14:44:14.651180   52225 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 14:44:14.651240   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 14:44:16.477514   52225 crio.go:462] duration metric: took 1.830911613s to copy over tarball
	I1014 14:44:16.477595   52225 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 14:44:19.102651   52225 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.625029279s)
	I1014 14:44:19.102678   52225 crio.go:469] duration metric: took 2.625132185s to extract the tarball
	I1014 14:44:19.102688   52225 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 14:44:19.145120   52225 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:44:19.190992   52225 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 14:44:19.191030   52225 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 14:44:19.191110   52225 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:44:19.191119   52225 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 14:44:19.191144   52225 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 14:44:19.191158   52225 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:44:19.191119   52225 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:44:19.191196   52225 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:44:19.191313   52225 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 14:44:19.191324   52225 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:44:19.194926   52225 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 14:44:19.194934   52225 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:44:19.194975   52225 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 14:44:19.194994   52225 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:44:19.194939   52225 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 14:44:19.195328   52225 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:44:19.195506   52225 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:44:19.195531   52225 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:44:19.371086   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:44:19.371108   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 14:44:19.379502   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:44:19.392248   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 14:44:19.406096   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:44:19.416392   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:44:19.441517   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 14:44:19.460171   52225 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 14:44:19.460238   52225 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:44:19.460319   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:19.524123   52225 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 14:44:19.524175   52225 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 14:44:19.524228   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:19.568176   52225 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 14:44:19.568226   52225 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 14:44:19.568230   52225 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 14:44:19.568262   52225 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:44:19.568277   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:19.568306   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:19.588506   52225 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 14:44:19.588552   52225 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:44:19.588604   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:19.642074   52225 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 14:44:19.642116   52225 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 14:44:19.642119   52225 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 14:44:19.642150   52225 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:44:19.642221   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:19.642230   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 14:44:19.642258   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 14:44:19.642286   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:44:19.642357   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:44:19.642162   52225 ssh_runner.go:195] Run: which crictl
	I1014 14:44:19.642164   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:44:19.668377   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:44:19.788249   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:44:19.788300   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:44:19.788307   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:44:19.788249   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 14:44:19.788348   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 14:44:19.788498   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 14:44:19.817962   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:44:19.942364   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:44:19.942435   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:44:19.942567   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 14:44:19.942639   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 14:44:19.942690   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 14:44:19.942646   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:44:19.972755   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:44:20.062196   52225 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:44:20.116407   52225 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 14:44:20.116483   52225 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 14:44:20.116535   52225 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 14:44:20.116544   52225 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 14:44:20.116590   52225 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 14:44:20.116617   52225 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 14:44:20.125689   52225 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 14:44:20.265788   52225 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 14:44:20.265866   52225 cache_images.go:92] duration metric: took 1.074817659s to LoadCachedImages
	W1014 14:44:20.265959   52225 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0: no such file or directory
	I1014 14:44:20.265978   52225 kubeadm.go:934] updating node { 192.168.50.21 8443 v1.20.0 crio true true} ...
	I1014 14:44:20.266089   52225 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-058309 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.21
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-058309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
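(Editor's note: the kubelet [Service] unit printed above is assembled from the node's Kubernetes version, hostname override, and IP. The Go sketch below rebuilds that ExecStart line from those values; the helper function is hypothetical, not minikube's kubeadm.go code, though the flag values are the ones shown in this run.)

package main

import (
	"fmt"
	"strings"
)

// Hypothetical helper that assembles the kubelet ExecStart flags seen above.
func kubeletExecStart(version, hostname, nodeIP string) string {
	flags := []string{
		"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
		"--config=/var/lib/kubelet/config.yaml",
		"--container-runtime=remote",
		"--container-runtime-endpoint=unix:///var/run/crio/crio.sock",
		"--hostname-override=" + hostname,
		"--kubeconfig=/etc/kubernetes/kubelet.conf",
		"--network-plugin=cni",
		"--node-ip=" + nodeIP,
	}
	return "/var/lib/minikube/binaries/" + version + "/kubelet " + strings.Join(flags, " ")
}

func main() {
	fmt.Println(kubeletExecStart("v1.20.0", "kubernetes-upgrade-058309", "192.168.50.21"))
}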
	I1014 14:44:20.266168   52225 ssh_runner.go:195] Run: crio config
	I1014 14:44:20.323337   52225 cni.go:84] Creating CNI manager for ""
	I1014 14:44:20.323376   52225 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:44:20.323386   52225 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:44:20.323404   52225 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.21 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-058309 NodeName:kubernetes-upgrade-058309 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.21"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.21 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 14:44:20.323594   52225 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.21
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-058309"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.21
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.21"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 14:44:20.323668   52225 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 14:44:20.335502   52225 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:44:20.335575   52225 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 14:44:20.347492   52225 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1014 14:44:20.368167   52225 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:44:20.386579   52225 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 14:44:20.405718   52225 ssh_runner.go:195] Run: grep 192.168.50.21	control-plane.minikube.internal$ /etc/hosts
	I1014 14:44:20.410123   52225 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.21	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
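(Editor's note: the /etc/hosts rewrites above, here and earlier for host.minikube.internal, follow the same idiom: `grep -v` the old tab-separated entry, append the new one, then copy the temp file back. The Go sketch below expresses that idiom on an in-memory string; it is illustrative only and never touches the real /etc/hosts.)

package main

import (
	"fmt"
	"strings"
)

// Drop any existing line for `name`, then append "ip\tname".
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.50.21", "control-plane.minikube.internal"))
}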
	I1014 14:44:20.423493   52225 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:44:20.571324   52225 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:44:20.589923   52225 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309 for IP: 192.168.50.21
	I1014 14:44:20.589958   52225 certs.go:194] generating shared ca certs ...
	I1014 14:44:20.589980   52225 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:44:20.590162   52225 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 14:44:20.590232   52225 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 14:44:20.590249   52225 certs.go:256] generating profile certs ...
	I1014 14:44:20.590319   52225 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.key
	I1014 14:44:20.590352   52225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.crt with IP's: []
	I1014 14:44:20.665281   52225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.crt ...
	I1014 14:44:20.665321   52225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.crt: {Name:mk73c35a5ac14e4b917f659ee225f1a4a43e9d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:44:20.665537   52225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.key ...
	I1014 14:44:20.665562   52225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.key: {Name:mk091321cf788461a16ac9c8e26f97f74c2ff692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:44:20.665695   52225 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.key.e9d61ec1
	I1014 14:44:20.665721   52225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.crt.e9d61ec1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.21]
	I1014 14:44:20.839725   52225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.crt.e9d61ec1 ...
	I1014 14:44:20.839762   52225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.crt.e9d61ec1: {Name:mk692621293f758c06fdc8c472389032fcb72128 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:44:20.839966   52225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.key.e9d61ec1 ...
	I1014 14:44:20.839991   52225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.key.e9d61ec1: {Name:mkbac559108237b7e76bf4b06bc1edf5c778dc20 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:44:20.840095   52225 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.crt.e9d61ec1 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.crt
	I1014 14:44:20.840231   52225 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.key.e9d61ec1 -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.key
	I1014 14:44:20.840329   52225 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.key
	I1014 14:44:20.840353   52225 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.crt with IP's: []
	I1014 14:44:20.987980   52225 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.crt ...
	I1014 14:44:20.988015   52225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.crt: {Name:mk05d7d99eeb19cea66f679190eb1631645ed3bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:44:20.988204   52225 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.key ...
	I1014 14:44:20.988224   52225 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.key: {Name:mked64f21efb3d80a6ba239b87b3c64419c51e4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:44:20.988429   52225 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 14:44:20.988488   52225 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 14:44:20.988505   52225 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 14:44:20.988538   52225 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 14:44:20.988571   52225 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:44:20.988603   52225 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 14:44:20.988658   52225 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:44:20.989398   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:44:21.026698   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:44:21.064070   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:44:21.096412   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 14:44:21.127356   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1014 14:44:21.158625   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 14:44:21.189422   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:44:21.220402   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 14:44:21.249055   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 14:44:21.279339   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 14:44:21.306044   52225 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:44:21.336815   52225 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:44:21.356072   52225 ssh_runner.go:195] Run: openssl version
	I1014 14:44:21.362354   52225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 14:44:21.374434   52225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 14:44:21.379548   52225 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:44:21.379624   52225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 14:44:21.386338   52225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:44:21.398346   52225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:44:21.410887   52225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:44:21.416371   52225 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:44:21.416435   52225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:44:21.422920   52225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:44:21.435196   52225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 14:44:21.447371   52225 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 14:44:21.452745   52225 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:44:21.452814   52225 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 14:44:21.459118   52225 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 14:44:21.471370   52225 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:44:21.476487   52225 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 14:44:21.476556   52225 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-058309 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-058309 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:44:21.476655   52225 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 14:44:21.476724   52225 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:44:21.518248   52225 cri.go:89] found id: ""
	I1014 14:44:21.518326   52225 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 14:44:21.535878   52225 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 14:44:21.555445   52225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 14:44:21.571105   52225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 14:44:21.571135   52225 kubeadm.go:157] found existing configuration files:
	
	I1014 14:44:21.571201   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 14:44:21.585957   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 14:44:21.586043   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 14:44:21.602554   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 14:44:21.624109   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 14:44:21.624216   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 14:44:21.648576   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 14:44:21.661263   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 14:44:21.661353   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 14:44:21.675347   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 14:44:21.685319   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 14:44:21.685391   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 14:44:21.695450   52225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 14:44:21.844330   52225 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 14:44:21.844721   52225 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 14:44:22.014321   52225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 14:44:22.014511   52225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 14:44:22.014664   52225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 14:44:22.254200   52225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 14:44:22.256497   52225 out.go:235]   - Generating certificates and keys ...
	I1014 14:44:22.256678   52225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 14:44:22.256780   52225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 14:44:22.353532   52225 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 14:44:22.562266   52225 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 14:44:22.967606   52225 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 14:44:23.052582   52225 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 14:44:23.287767   52225 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 14:44:23.288084   52225 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-058309 localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
	I1014 14:44:23.457716   52225 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 14:44:23.458033   52225 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-058309 localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
	I1014 14:44:23.691919   52225 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 14:44:23.791373   52225 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 14:44:24.002883   52225 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 14:44:24.003087   52225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 14:44:24.255278   52225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 14:44:24.797067   52225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 14:44:24.918875   52225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 14:44:24.979633   52225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 14:44:24.998058   52225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 14:44:24.999663   52225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 14:44:24.999755   52225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 14:44:25.149836   52225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 14:44:25.151875   52225 out.go:235]   - Booting up control plane ...
	I1014 14:44:25.152060   52225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 14:44:25.167685   52225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 14:44:25.170242   52225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 14:44:25.171806   52225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 14:44:25.184828   52225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 14:45:05.150465   52225 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 14:45:05.151183   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:45:05.151417   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:45:10.150672   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:45:10.151001   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:45:20.149900   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:45:20.150124   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:45:40.150266   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:45:40.150533   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:46:20.149453   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:46:20.150092   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:46:20.150150   52225 kubeadm.go:310] 
	I1014 14:46:20.150274   52225 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 14:46:20.150378   52225 kubeadm.go:310] 		timed out waiting for the condition
	I1014 14:46:20.150389   52225 kubeadm.go:310] 
	I1014 14:46:20.150473   52225 kubeadm.go:310] 	This error is likely caused by:
	I1014 14:46:20.150546   52225 kubeadm.go:310] 		- The kubelet is not running
	I1014 14:46:20.150789   52225 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 14:46:20.150807   52225 kubeadm.go:310] 
	I1014 14:46:20.151052   52225 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 14:46:20.151135   52225 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 14:46:20.151211   52225 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 14:46:20.151227   52225 kubeadm.go:310] 
	I1014 14:46:20.151463   52225 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 14:46:20.151650   52225 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 14:46:20.151665   52225 kubeadm.go:310] 
	I1014 14:46:20.151889   52225 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 14:46:20.152101   52225 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 14:46:20.152269   52225 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 14:46:20.152441   52225 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 14:46:20.152455   52225 kubeadm.go:310] 
	I1014 14:46:20.152694   52225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 14:46:20.152890   52225 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 14:46:20.153085   52225 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 14:46:20.153675   52225 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-058309 localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-058309 localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-058309 localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-058309 localhost] and IPs [192.168.50.21 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 14:46:20.153735   52225 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 14:46:21.518104   52225 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.364330686s)
	I1014 14:46:21.518191   52225 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:46:21.535791   52225 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 14:46:21.546203   52225 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 14:46:21.546227   52225 kubeadm.go:157] found existing configuration files:
	
	I1014 14:46:21.546286   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 14:46:21.555894   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 14:46:21.555960   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 14:46:21.565851   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 14:46:21.578778   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 14:46:21.578838   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 14:46:21.591997   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 14:46:21.602122   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 14:46:21.602181   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 14:46:21.612032   52225 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 14:46:21.621764   52225 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 14:46:21.621823   52225 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 14:46:21.634081   52225 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 14:46:21.713853   52225 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 14:46:21.713930   52225 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 14:46:21.868210   52225 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 14:46:21.868354   52225 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 14:46:21.868514   52225 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 14:46:22.050994   52225 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 14:46:22.053263   52225 out.go:235]   - Generating certificates and keys ...
	I1014 14:46:22.053376   52225 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 14:46:22.053473   52225 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 14:46:22.053585   52225 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 14:46:22.053682   52225 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 14:46:22.053774   52225 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 14:46:22.053847   52225 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 14:46:22.053907   52225 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 14:46:22.053961   52225 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 14:46:22.054051   52225 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 14:46:22.054441   52225 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 14:46:22.054492   52225 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 14:46:22.054576   52225 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 14:46:22.145002   52225 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 14:46:22.273266   52225 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 14:46:22.385067   52225 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 14:46:22.566276   52225 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 14:46:22.580935   52225 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 14:46:22.581919   52225 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 14:46:22.581977   52225 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 14:46:22.747071   52225 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 14:46:22.750044   52225 out.go:235]   - Booting up control plane ...
	I1014 14:46:22.750176   52225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 14:46:22.758665   52225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 14:46:22.758789   52225 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 14:46:22.760550   52225 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 14:46:22.768950   52225 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 14:47:02.771189   52225 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 14:47:02.771816   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:47:02.772113   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:47:07.772623   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:47:07.772886   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:47:17.773498   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:47:17.773739   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:47:37.775233   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:47:37.775532   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:48:17.775099   52225 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:48:17.775314   52225 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:48:17.775327   52225 kubeadm.go:310] 
	I1014 14:48:17.775377   52225 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 14:48:17.775437   52225 kubeadm.go:310] 		timed out waiting for the condition
	I1014 14:48:17.775452   52225 kubeadm.go:310] 
	I1014 14:48:17.775502   52225 kubeadm.go:310] 	This error is likely caused by:
	I1014 14:48:17.775546   52225 kubeadm.go:310] 		- The kubelet is not running
	I1014 14:48:17.775716   52225 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 14:48:17.775741   52225 kubeadm.go:310] 
	I1014 14:48:17.775866   52225 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 14:48:17.775918   52225 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 14:48:17.775970   52225 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 14:48:17.775980   52225 kubeadm.go:310] 
	I1014 14:48:17.776114   52225 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 14:48:17.776235   52225 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 14:48:17.776245   52225 kubeadm.go:310] 
	I1014 14:48:17.776396   52225 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 14:48:17.776532   52225 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 14:48:17.776636   52225 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 14:48:17.776733   52225 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 14:48:17.776743   52225 kubeadm.go:310] 
	I1014 14:48:17.777351   52225 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 14:48:17.777450   52225 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 14:48:17.777523   52225 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 14:48:17.777591   52225 kubeadm.go:394] duration metric: took 3m56.301043512s to StartCluster
	I1014 14:48:17.777653   52225 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:48:17.777720   52225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:48:17.824647   52225 cri.go:89] found id: ""
	I1014 14:48:17.824676   52225 logs.go:282] 0 containers: []
	W1014 14:48:17.824687   52225 logs.go:284] No container was found matching "kube-apiserver"
	I1014 14:48:17.824694   52225 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 14:48:17.824756   52225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:48:17.865273   52225 cri.go:89] found id: ""
	I1014 14:48:17.865314   52225 logs.go:282] 0 containers: []
	W1014 14:48:17.865325   52225 logs.go:284] No container was found matching "etcd"
	I1014 14:48:17.865333   52225 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 14:48:17.865396   52225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:48:17.906941   52225 cri.go:89] found id: ""
	I1014 14:48:17.906970   52225 logs.go:282] 0 containers: []
	W1014 14:48:17.906978   52225 logs.go:284] No container was found matching "coredns"
	I1014 14:48:17.906984   52225 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:48:17.907044   52225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:48:17.941924   52225 cri.go:89] found id: ""
	I1014 14:48:17.941955   52225 logs.go:282] 0 containers: []
	W1014 14:48:17.941964   52225 logs.go:284] No container was found matching "kube-scheduler"
	I1014 14:48:17.941972   52225 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:48:17.942035   52225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:48:17.979426   52225 cri.go:89] found id: ""
	I1014 14:48:17.979455   52225 logs.go:282] 0 containers: []
	W1014 14:48:17.979465   52225 logs.go:284] No container was found matching "kube-proxy"
	I1014 14:48:17.979472   52225 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:48:17.979521   52225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:48:18.017506   52225 cri.go:89] found id: ""
	I1014 14:48:18.017536   52225 logs.go:282] 0 containers: []
	W1014 14:48:18.017545   52225 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 14:48:18.017554   52225 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 14:48:18.017622   52225 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:48:18.054785   52225 cri.go:89] found id: ""
	I1014 14:48:18.054815   52225 logs.go:282] 0 containers: []
	W1014 14:48:18.054828   52225 logs.go:284] No container was found matching "kindnet"
	I1014 14:48:18.054840   52225 logs.go:123] Gathering logs for dmesg ...
	I1014 14:48:18.054856   52225 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:48:18.068551   52225 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:48:18.068585   52225 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 14:48:18.190073   52225 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 14:48:18.190098   52225 logs.go:123] Gathering logs for CRI-O ...
	I1014 14:48:18.190112   52225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 14:48:18.298387   52225 logs.go:123] Gathering logs for container status ...
	I1014 14:48:18.298420   52225 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:48:18.350736   52225 logs.go:123] Gathering logs for kubelet ...
	I1014 14:48:18.350768   52225 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:48:18.406999   52225 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 14:48:18.407059   52225 out.go:270] * 
	* 
	W1014 14:48:18.407123   52225 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 14:48:18.407141   52225 out.go:270] * 
	W1014 14:48:18.408062   52225 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:48:18.411329   52225 out.go:201] 
	W1014 14:48:18.412637   52225 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 14:48:18.412684   52225 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 14:48:18.412708   52225 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 14:48:18.414296   52225 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
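The kubeadm output above already names the recovery path minikube suggests: inspect the kubelet on the guest, then retry the start with the systemd cgroup driver. The sketch below only consolidates commands and flags that appear verbatim in the log above, with the profile name and versions taken from this run.

    # diagnostics quoted from the kubeadm troubleshooting hints above
    systemctl status kubelet
    journalctl -xeu kubelet
    crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # retry the failing start with the suggested kubelet cgroup driver
    out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 \
      --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio \
      --extra-config=kubelet.cgroup-driver=systemd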
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-058309
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-058309: (1.419477461s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-058309 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-058309 status --format={{.Host}}: exit status 7 (65.727261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
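The non-zero exit here is expected: the profile was stopped one step earlier, and the test only reads the host state before restarting. Re-running the same check by hand uses the command shown above; the stdout block records the value this run produced.

    # host-state check as invoked by the test; this run printed "Stopped"
    out/minikube-linux-amd64 -p kubernetes-upgrade-058309 status --format={{.Host}}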
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1014 14:48:20.071112   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (44.671332087s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-058309 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (90.933099ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-058309] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-058309
	    minikube start -p kubernetes-upgrade-058309 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0583092 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-058309 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
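The refusal above lists three ways forward; the test takes the third and restarts at the newer version in the next step. For completeness, the destructive first option, quoted from the suggestion with this run's profile name, would be:

    # recreate the cluster at the older version (option 1 from the suggestion above)
    minikube delete -p kubernetes-upgrade-058309
    minikube start -p kubernetes-upgrade-058309 --kubernetes-version=v1.20.0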
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-058309 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (40.021924733s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-10-14 14:49:44.811835786 +0000 UTC m=+4269.093184133
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-058309 -n kubernetes-upgrade-058309
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-058309 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-058309 logs -n 25: (1.948676841s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p running-upgrade-833927             | running-upgrade-833927    | jenkins | v1.34.0 | 14 Oct 24 14:45 UTC | 14 Oct 24 14:46 UTC |
	| start   | -p cert-expiration-750530             | cert-expiration-750530    | jenkins | v1.34.0 | 14 Oct 24 14:46 UTC | 14 Oct 24 14:46 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p force-systemd-flag-273294          | force-systemd-flag-273294 | jenkins | v1.34.0 | 14 Oct 24 14:46 UTC | 14 Oct 24 14:47 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-229138 sudo           | NoKubernetes-229138       | jenkins | v1.34.0 | 14 Oct 24 14:46 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-229138                | NoKubernetes-229138       | jenkins | v1.34.0 | 14 Oct 24 14:46 UTC | 14 Oct 24 14:46 UTC |
	| start   | -p cert-options-914285                | cert-options-914285       | jenkins | v1.34.0 | 14 Oct 24 14:46 UTC | 14 Oct 24 14:47 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-273294 ssh cat     | force-systemd-flag-273294 | jenkins | v1.34.0 | 14 Oct 24 14:47 UTC | 14 Oct 24 14:47 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-273294          | force-systemd-flag-273294 | jenkins | v1.34.0 | 14 Oct 24 14:47 UTC | 14 Oct 24 14:47 UTC |
	| start   | -p pause-329024 --memory=2048         | pause-329024              | jenkins | v1.34.0 | 14 Oct 24 14:47 UTC | 14 Oct 24 14:48 UTC |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=all --driver=kvm2              |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-914285 ssh               | cert-options-914285       | jenkins | v1.34.0 | 14 Oct 24 14:47 UTC | 14 Oct 24 14:47 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-914285 -- sudo        | cert-options-914285       | jenkins | v1.34.0 | 14 Oct 24 14:47 UTC | 14 Oct 24 14:47 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-914285                | cert-options-914285       | jenkins | v1.34.0 | 14 Oct 24 14:47 UTC | 14 Oct 24 14:47 UTC |
	| start   | -p auto-517678 --memory=3072          | auto-517678               | jenkins | v1.34.0 | 14 Oct 24 14:47 UTC | 14 Oct 24 14:49 UTC |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-058309          | kubernetes-upgrade-058309 | jenkins | v1.34.0 | 14 Oct 24 14:48 UTC | 14 Oct 24 14:48 UTC |
	| start   | -p kubernetes-upgrade-058309          | kubernetes-upgrade-058309 | jenkins | v1.34.0 | 14 Oct 24 14:48 UTC | 14 Oct 24 14:49 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-329024                       | pause-329024              | jenkins | v1.34.0 | 14 Oct 24 14:48 UTC | 14 Oct 24 14:49 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-058309          | kubernetes-upgrade-058309 | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-058309          | kubernetes-upgrade-058309 | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC | 14 Oct 24 14:49 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1          |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| pause   | -p pause-329024                       | pause-329024              | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC | 14 Oct 24 14:49 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| unpause | -p pause-329024                       | pause-329024              | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC | 14 Oct 24 14:49 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| pause   | -p pause-329024                       | pause-329024              | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC | 14 Oct 24 14:49 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-329024                       | pause-329024              | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC | 14 Oct 24 14:49 UTC |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	| delete  | -p pause-329024                       | pause-329024              | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC | 14 Oct 24 14:49 UTC |
	| start   | -p kindnet-517678                     | kindnet-517678            | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC |                     |
	|         | --memory=3072                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                    |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2           |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p auto-517678 pgrep -a               | auto-517678               | jenkins | v1.34.0 | 14 Oct 24 14:49 UTC | 14 Oct 24 14:49 UTC |
	|         | kubelet                               |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:49:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:49:24.965888   57628 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:49:24.966001   57628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:49:24.966009   57628 out.go:358] Setting ErrFile to fd 2...
	I1014 14:49:24.966014   57628 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:49:24.966211   57628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:49:24.966841   57628 out.go:352] Setting JSON to false
	I1014 14:49:24.967733   57628 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5515,"bootTime":1728911850,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:49:24.967830   57628 start.go:139] virtualization: kvm guest
	I1014 14:49:24.970197   57628 out.go:177] * [kindnet-517678] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:49:24.971500   57628 notify.go:220] Checking for updates...
	I1014 14:49:24.971540   57628 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:49:24.973115   57628 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:49:24.974505   57628 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:49:24.975754   57628 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:49:24.976998   57628 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:49:24.978186   57628 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:49:24.979749   57628 config.go:182] Loaded profile config "auto-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:49:24.979845   57628 config.go:182] Loaded profile config "cert-expiration-750530": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:49:24.979947   57628 config.go:182] Loaded profile config "kubernetes-upgrade-058309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:49:24.980037   57628 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:49:25.017611   57628 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 14:49:25.019187   57628 start.go:297] selected driver: kvm2
	I1014 14:49:25.019204   57628 start.go:901] validating driver "kvm2" against <nil>
	I1014 14:49:25.019214   57628 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:49:25.020126   57628 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:49:25.020219   57628 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:49:25.036421   57628 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:49:25.036505   57628 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 14:49:25.036820   57628 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:49:25.036856   57628 cni.go:84] Creating CNI manager for "kindnet"
	I1014 14:49:25.036863   57628 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 14:49:25.036940   57628 start.go:340] cluster config:
	{Name:kindnet-517678 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-517678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:49:25.037055   57628 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:49:25.039109   57628 out.go:177] * Starting "kindnet-517678" primary control-plane node in "kindnet-517678" cluster
	I1014 14:49:25.040529   57628 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 14:49:25.040582   57628 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 14:49:25.040592   57628 cache.go:56] Caching tarball of preloaded images
	I1014 14:49:25.040689   57628 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:49:25.040699   57628 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 14:49:25.040790   57628 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/config.json ...
	I1014 14:49:25.040806   57628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/config.json: {Name:mk3d202fbf8f616b4df964c255ba9ddf8f1c944b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:49:25.040935   57628 start.go:360] acquireMachinesLock for kindnet-517678: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:49:25.040964   57628 start.go:364] duration metric: took 15.705µs to acquireMachinesLock for "kindnet-517678"
	I1014 14:49:25.040979   57628 start.go:93] Provisioning new machine with config: &{Name:kindnet-517678 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-517678 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 14:49:25.041038   57628 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 14:49:22.404736   56347 pod_ready.go:103] pod "coredns-7c65d6cfc9-86xd4" in "kube-system" namespace has status "Ready":"False"
	I1014 14:49:24.904511   56347 pod_ready.go:103] pod "coredns-7c65d6cfc9-86xd4" in "kube-system" namespace has status "Ready":"False"
	I1014 14:49:25.914571   56347 pod_ready.go:93] pod "coredns-7c65d6cfc9-86xd4" in "kube-system" namespace has status "Ready":"True"
	I1014 14:49:25.914618   56347 pod_ready.go:82] duration metric: took 39.516913029s for pod "coredns-7c65d6cfc9-86xd4" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.914632   56347 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-mpr94" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.925745   56347 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-mpr94" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mpr94" not found
	I1014 14:49:25.925778   56347 pod_ready.go:82] duration metric: took 11.136909ms for pod "coredns-7c65d6cfc9-mpr94" in "kube-system" namespace to be "Ready" ...
	E1014 14:49:25.925792   56347 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-mpr94" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-mpr94" not found
	I1014 14:49:25.925801   56347 pod_ready.go:79] waiting up to 15m0s for pod "etcd-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.932741   56347 pod_ready.go:93] pod "etcd-auto-517678" in "kube-system" namespace has status "Ready":"True"
	I1014 14:49:25.932764   56347 pod_ready.go:82] duration metric: took 6.955969ms for pod "etcd-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.932773   56347 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.938399   56347 pod_ready.go:93] pod "kube-apiserver-auto-517678" in "kube-system" namespace has status "Ready":"True"
	I1014 14:49:25.938420   56347 pod_ready.go:82] duration metric: took 5.640899ms for pod "kube-apiserver-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.938429   56347 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.943041   56347 pod_ready.go:93] pod "kube-controller-manager-auto-517678" in "kube-system" namespace has status "Ready":"True"
	I1014 14:49:25.943068   56347 pod_ready.go:82] duration metric: took 4.630341ms for pod "kube-controller-manager-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:25.943081   56347 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-dphhm" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:26.102317   56347 pod_ready.go:93] pod "kube-proxy-dphhm" in "kube-system" namespace has status "Ready":"True"
	I1014 14:49:26.102348   56347 pod_ready.go:82] duration metric: took 159.257789ms for pod "kube-proxy-dphhm" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:26.102361   56347 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:26.502333   56347 pod_ready.go:93] pod "kube-scheduler-auto-517678" in "kube-system" namespace has status "Ready":"True"
	I1014 14:49:26.502355   56347 pod_ready.go:82] duration metric: took 399.986536ms for pod "kube-scheduler-auto-517678" in "kube-system" namespace to be "Ready" ...
	I1014 14:49:26.502362   56347 pod_ready.go:39] duration metric: took 40.12194638s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:49:26.502377   56347 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:49:26.502433   56347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:49:26.517003   56347 api_server.go:72] duration metric: took 40.79457036s to wait for apiserver process to appear ...
	I1014 14:49:26.517038   56347 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:49:26.517068   56347 api_server.go:253] Checking apiserver healthz at https://192.168.83.109:8443/healthz ...
	I1014 14:49:26.523165   56347 api_server.go:279] https://192.168.83.109:8443/healthz returned 200:
	ok
	I1014 14:49:26.524117   56347 api_server.go:141] control plane version: v1.31.1
	I1014 14:49:26.524139   56347 api_server.go:131] duration metric: took 7.093906ms to wait for apiserver health ...
	I1014 14:49:26.524147   56347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 14:49:26.708047   56347 system_pods.go:59] 7 kube-system pods found
	I1014 14:49:26.708074   56347 system_pods.go:61] "coredns-7c65d6cfc9-86xd4" [e93299cd-1d12-47a4-98fb-adb60e652fe3] Running
	I1014 14:49:26.708079   56347 system_pods.go:61] "etcd-auto-517678" [44754a6c-34b0-47da-a135-d4c7cfe070d4] Running
	I1014 14:49:26.708083   56347 system_pods.go:61] "kube-apiserver-auto-517678" [b3a74615-8753-476d-98f1-9a705127fb7a] Running
	I1014 14:49:26.708086   56347 system_pods.go:61] "kube-controller-manager-auto-517678" [9a10fe26-b2bd-4f0e-bddc-b74b0de038e3] Running
	I1014 14:49:26.708090   56347 system_pods.go:61] "kube-proxy-dphhm" [7e6cc39e-9851-44b8-a564-972cb9c039d6] Running
	I1014 14:49:26.708092   56347 system_pods.go:61] "kube-scheduler-auto-517678" [31e7ede4-7c7d-4899-af11-142add6552ce] Running
	I1014 14:49:26.708097   56347 system_pods.go:61] "storage-provisioner" [08eb117c-53ba-4370-8ace-322f2b279072] Running
	I1014 14:49:26.708105   56347 system_pods.go:74] duration metric: took 183.952105ms to wait for pod list to return data ...
	I1014 14:49:26.708113   56347 default_sa.go:34] waiting for default service account to be created ...
	I1014 14:49:26.901555   56347 default_sa.go:45] found service account: "default"
	I1014 14:49:26.901593   56347 default_sa.go:55] duration metric: took 193.469278ms for default service account to be created ...
	I1014 14:49:26.901608   56347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 14:49:27.104615   56347 system_pods.go:86] 7 kube-system pods found
	I1014 14:49:27.104646   56347 system_pods.go:89] "coredns-7c65d6cfc9-86xd4" [e93299cd-1d12-47a4-98fb-adb60e652fe3] Running
	I1014 14:49:27.104654   56347 system_pods.go:89] "etcd-auto-517678" [44754a6c-34b0-47da-a135-d4c7cfe070d4] Running
	I1014 14:49:27.104660   56347 system_pods.go:89] "kube-apiserver-auto-517678" [b3a74615-8753-476d-98f1-9a705127fb7a] Running
	I1014 14:49:27.104665   56347 system_pods.go:89] "kube-controller-manager-auto-517678" [9a10fe26-b2bd-4f0e-bddc-b74b0de038e3] Running
	I1014 14:49:27.104685   56347 system_pods.go:89] "kube-proxy-dphhm" [7e6cc39e-9851-44b8-a564-972cb9c039d6] Running
	I1014 14:49:27.104693   56347 system_pods.go:89] "kube-scheduler-auto-517678" [31e7ede4-7c7d-4899-af11-142add6552ce] Running
	I1014 14:49:27.104699   56347 system_pods.go:89] "storage-provisioner" [08eb117c-53ba-4370-8ace-322f2b279072] Running
	I1014 14:49:27.104713   56347 system_pods.go:126] duration metric: took 203.096164ms to wait for k8s-apps to be running ...
	I1014 14:49:27.104726   56347 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 14:49:27.104785   56347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:49:27.120326   56347 system_svc.go:56] duration metric: took 15.592788ms WaitForService to wait for kubelet
	I1014 14:49:27.120353   56347 kubeadm.go:582] duration metric: took 41.397928809s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:49:27.120370   56347 node_conditions.go:102] verifying NodePressure condition ...
	I1014 14:49:27.302804   56347 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 14:49:27.302834   56347 node_conditions.go:123] node cpu capacity is 2
	I1014 14:49:27.302844   56347 node_conditions.go:105] duration metric: took 182.469586ms to run NodePressure ...
	I1014 14:49:27.302854   56347 start.go:241] waiting for startup goroutines ...
	I1014 14:49:27.302860   56347 start.go:246] waiting for cluster config update ...
	I1014 14:49:27.302874   56347 start.go:255] writing updated cluster config ...
	I1014 14:49:27.303217   56347 ssh_runner.go:195] Run: rm -f paused
	I1014 14:49:27.355585   56347 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 14:49:27.358148   56347 out.go:177] * Done! kubectl is now configured to use "auto-517678" cluster and "default" namespace by default
	I1014 14:49:25.042809   57628 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1014 14:49:25.042948   57628 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:49:25.042982   57628 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:49:25.058091   57628 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42761
	I1014 14:49:25.058524   57628 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:49:25.059107   57628 main.go:141] libmachine: Using API Version  1
	I1014 14:49:25.059134   57628 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:49:25.059452   57628 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:49:25.059681   57628 main.go:141] libmachine: (kindnet-517678) Calling .GetMachineName
	I1014 14:49:25.059862   57628 main.go:141] libmachine: (kindnet-517678) Calling .DriverName
	I1014 14:49:25.060016   57628 start.go:159] libmachine.API.Create for "kindnet-517678" (driver="kvm2")
	I1014 14:49:25.060068   57628 client.go:168] LocalClient.Create starting
	I1014 14:49:25.060104   57628 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 14:49:25.060134   57628 main.go:141] libmachine: Decoding PEM data...
	I1014 14:49:25.060151   57628 main.go:141] libmachine: Parsing certificate...
	I1014 14:49:25.060223   57628 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 14:49:25.060247   57628 main.go:141] libmachine: Decoding PEM data...
	I1014 14:49:25.060260   57628 main.go:141] libmachine: Parsing certificate...
	I1014 14:49:25.060278   57628 main.go:141] libmachine: Running pre-create checks...
	I1014 14:49:25.060297   57628 main.go:141] libmachine: (kindnet-517678) Calling .PreCreateCheck
	I1014 14:49:25.060694   57628 main.go:141] libmachine: (kindnet-517678) Calling .GetConfigRaw
	I1014 14:49:25.061052   57628 main.go:141] libmachine: Creating machine...
	I1014 14:49:25.061065   57628 main.go:141] libmachine: (kindnet-517678) Calling .Create
	I1014 14:49:25.061239   57628 main.go:141] libmachine: (kindnet-517678) Creating KVM machine...
	I1014 14:49:25.062555   57628 main.go:141] libmachine: (kindnet-517678) DBG | found existing default KVM network
	I1014 14:49:25.064122   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:25.063979   57651 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e8:92:47} reservation:<nil>}
	I1014 14:49:25.065164   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:25.065047   57651 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:83:9f} reservation:<nil>}
	I1014 14:49:25.066340   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:25.066272   57651 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000283120}
	I1014 14:49:25.066366   57628 main.go:141] libmachine: (kindnet-517678) DBG | created network xml: 
	I1014 14:49:25.066377   57628 main.go:141] libmachine: (kindnet-517678) DBG | <network>
	I1014 14:49:25.066396   57628 main.go:141] libmachine: (kindnet-517678) DBG |   <name>mk-kindnet-517678</name>
	I1014 14:49:25.066408   57628 main.go:141] libmachine: (kindnet-517678) DBG |   <dns enable='no'/>
	I1014 14:49:25.066439   57628 main.go:141] libmachine: (kindnet-517678) DBG |   
	I1014 14:49:25.066459   57628 main.go:141] libmachine: (kindnet-517678) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1014 14:49:25.066466   57628 main.go:141] libmachine: (kindnet-517678) DBG |     <dhcp>
	I1014 14:49:25.066478   57628 main.go:141] libmachine: (kindnet-517678) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1014 14:49:25.066486   57628 main.go:141] libmachine: (kindnet-517678) DBG |     </dhcp>
	I1014 14:49:25.066490   57628 main.go:141] libmachine: (kindnet-517678) DBG |   </ip>
	I1014 14:49:25.066495   57628 main.go:141] libmachine: (kindnet-517678) DBG |   
	I1014 14:49:25.066501   57628 main.go:141] libmachine: (kindnet-517678) DBG | </network>
	I1014 14:49:25.066509   57628 main.go:141] libmachine: (kindnet-517678) DBG | 
	I1014 14:49:25.071876   57628 main.go:141] libmachine: (kindnet-517678) DBG | trying to create private KVM network mk-kindnet-517678 192.168.61.0/24...
	I1014 14:49:25.141401   57628 main.go:141] libmachine: (kindnet-517678) DBG | private KVM network mk-kindnet-517678 192.168.61.0/24 created
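The XML above is what the kvm2 driver hands to libvirt to create the isolated mk-kindnet-517678 network (192.168.61.0/24 with its own DHCP range). As a rough illustration of the same step, the network could also be defined and started by shelling out to virsh; the sketch below is hypothetical and not the code path minikube uses (the driver talks to libvirt directly):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// networkXML mirrors the mk-kindnet-517678 definition from the log above.
const networkXML = `<network>
  <name>mk-kindnet-517678</name>
  <dns enable='no'/>
  <ip address='192.168.61.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.61.2' end='192.168.61.253'/>
    </dhcp>
  </ip>
</network>`

// defineAndStartNetwork writes the XML to a temporary file and runs the
// equivalent virsh commands. Illustrative sketch only.
func defineAndStartNetwork() error {
	f, err := os.CreateTemp("", "net-*.xml")
	if err != nil {
		return err
	}
	defer os.Remove(f.Name())
	if _, err := f.WriteString(networkXML); err != nil {
		return err
	}
	f.Close()

	for _, args := range [][]string{
		{"net-define", f.Name()},
		{"net-start", "mk-kindnet-517678"},
	} {
		if out, err := exec.Command("virsh", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("virsh %v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := defineAndStartNetwork(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}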
	I1014 14:49:25.141435   57628 main.go:141] libmachine: (kindnet-517678) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678 ...
	I1014 14:49:25.141449   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:25.141382   57651 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:49:25.141469   57628 main.go:141] libmachine: (kindnet-517678) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 14:49:25.141587   57628 main.go:141] libmachine: (kindnet-517678) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 14:49:25.395544   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:25.395427   57651 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678/id_rsa...
	I1014 14:49:25.461823   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:25.461665   57651 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678/kindnet-517678.rawdisk...
	I1014 14:49:25.461861   57628 main.go:141] libmachine: (kindnet-517678) DBG | Writing magic tar header
	I1014 14:49:25.461878   57628 main.go:141] libmachine: (kindnet-517678) DBG | Writing SSH key tar header
	I1014 14:49:25.461891   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:25.461814   57651 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678 ...
	I1014 14:49:25.461954   57628 main.go:141] libmachine: (kindnet-517678) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678
	I1014 14:49:25.462001   57628 main.go:141] libmachine: (kindnet-517678) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 14:49:25.462024   57628 main.go:141] libmachine: (kindnet-517678) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:49:25.462038   57628 main.go:141] libmachine: (kindnet-517678) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678 (perms=drwx------)
	I1014 14:49:25.462063   57628 main.go:141] libmachine: (kindnet-517678) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 14:49:25.462084   57628 main.go:141] libmachine: (kindnet-517678) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 14:49:25.462105   57628 main.go:141] libmachine: (kindnet-517678) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 14:49:25.462118   57628 main.go:141] libmachine: (kindnet-517678) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 14:49:25.462128   57628 main.go:141] libmachine: (kindnet-517678) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 14:49:25.462144   57628 main.go:141] libmachine: (kindnet-517678) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 14:49:25.462155   57628 main.go:141] libmachine: (kindnet-517678) DBG | Checking permissions on dir: /home/jenkins
	I1014 14:49:25.462167   57628 main.go:141] libmachine: (kindnet-517678) DBG | Checking permissions on dir: /home
	I1014 14:49:25.462177   57628 main.go:141] libmachine: (kindnet-517678) DBG | Skipping /home - not owner
	I1014 14:49:25.462224   57628 main.go:141] libmachine: (kindnet-517678) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 14:49:25.462244   57628 main.go:141] libmachine: (kindnet-517678) Creating domain...
	I1014 14:49:25.463267   57628 main.go:141] libmachine: (kindnet-517678) define libvirt domain using xml: 
	I1014 14:49:25.463287   57628 main.go:141] libmachine: (kindnet-517678) <domain type='kvm'>
	I1014 14:49:25.463297   57628 main.go:141] libmachine: (kindnet-517678)   <name>kindnet-517678</name>
	I1014 14:49:25.463304   57628 main.go:141] libmachine: (kindnet-517678)   <memory unit='MiB'>3072</memory>
	I1014 14:49:25.463343   57628 main.go:141] libmachine: (kindnet-517678)   <vcpu>2</vcpu>
	I1014 14:49:25.463365   57628 main.go:141] libmachine: (kindnet-517678)   <features>
	I1014 14:49:25.463379   57628 main.go:141] libmachine: (kindnet-517678)     <acpi/>
	I1014 14:49:25.463433   57628 main.go:141] libmachine: (kindnet-517678)     <apic/>
	I1014 14:49:25.463445   57628 main.go:141] libmachine: (kindnet-517678)     <pae/>
	I1014 14:49:25.463453   57628 main.go:141] libmachine: (kindnet-517678)     
	I1014 14:49:25.463461   57628 main.go:141] libmachine: (kindnet-517678)   </features>
	I1014 14:49:25.463472   57628 main.go:141] libmachine: (kindnet-517678)   <cpu mode='host-passthrough'>
	I1014 14:49:25.463481   57628 main.go:141] libmachine: (kindnet-517678)   
	I1014 14:49:25.463489   57628 main.go:141] libmachine: (kindnet-517678)   </cpu>
	I1014 14:49:25.463497   57628 main.go:141] libmachine: (kindnet-517678)   <os>
	I1014 14:49:25.463506   57628 main.go:141] libmachine: (kindnet-517678)     <type>hvm</type>
	I1014 14:49:25.463510   57628 main.go:141] libmachine: (kindnet-517678)     <boot dev='cdrom'/>
	I1014 14:49:25.463517   57628 main.go:141] libmachine: (kindnet-517678)     <boot dev='hd'/>
	I1014 14:49:25.463525   57628 main.go:141] libmachine: (kindnet-517678)     <bootmenu enable='no'/>
	I1014 14:49:25.463555   57628 main.go:141] libmachine: (kindnet-517678)   </os>
	I1014 14:49:25.463571   57628 main.go:141] libmachine: (kindnet-517678)   <devices>
	I1014 14:49:25.463585   57628 main.go:141] libmachine: (kindnet-517678)     <disk type='file' device='cdrom'>
	I1014 14:49:25.463607   57628 main.go:141] libmachine: (kindnet-517678)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678/boot2docker.iso'/>
	I1014 14:49:25.463637   57628 main.go:141] libmachine: (kindnet-517678)       <target dev='hdc' bus='scsi'/>
	I1014 14:49:25.463651   57628 main.go:141] libmachine: (kindnet-517678)       <readonly/>
	I1014 14:49:25.463663   57628 main.go:141] libmachine: (kindnet-517678)     </disk>
	I1014 14:49:25.463672   57628 main.go:141] libmachine: (kindnet-517678)     <disk type='file' device='disk'>
	I1014 14:49:25.463687   57628 main.go:141] libmachine: (kindnet-517678)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 14:49:25.463702   57628 main.go:141] libmachine: (kindnet-517678)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kindnet-517678/kindnet-517678.rawdisk'/>
	I1014 14:49:25.463775   57628 main.go:141] libmachine: (kindnet-517678)       <target dev='hda' bus='virtio'/>
	I1014 14:49:25.463808   57628 main.go:141] libmachine: (kindnet-517678)     </disk>
	I1014 14:49:25.463821   57628 main.go:141] libmachine: (kindnet-517678)     <interface type='network'>
	I1014 14:49:25.463832   57628 main.go:141] libmachine: (kindnet-517678)       <source network='mk-kindnet-517678'/>
	I1014 14:49:25.463843   57628 main.go:141] libmachine: (kindnet-517678)       <model type='virtio'/>
	I1014 14:49:25.463850   57628 main.go:141] libmachine: (kindnet-517678)     </interface>
	I1014 14:49:25.463856   57628 main.go:141] libmachine: (kindnet-517678)     <interface type='network'>
	I1014 14:49:25.463862   57628 main.go:141] libmachine: (kindnet-517678)       <source network='default'/>
	I1014 14:49:25.463867   57628 main.go:141] libmachine: (kindnet-517678)       <model type='virtio'/>
	I1014 14:49:25.463876   57628 main.go:141] libmachine: (kindnet-517678)     </interface>
	I1014 14:49:25.463887   57628 main.go:141] libmachine: (kindnet-517678)     <serial type='pty'>
	I1014 14:49:25.463901   57628 main.go:141] libmachine: (kindnet-517678)       <target port='0'/>
	I1014 14:49:25.463912   57628 main.go:141] libmachine: (kindnet-517678)     </serial>
	I1014 14:49:25.463921   57628 main.go:141] libmachine: (kindnet-517678)     <console type='pty'>
	I1014 14:49:25.463933   57628 main.go:141] libmachine: (kindnet-517678)       <target type='serial' port='0'/>
	I1014 14:49:25.463942   57628 main.go:141] libmachine: (kindnet-517678)     </console>
	I1014 14:49:25.463952   57628 main.go:141] libmachine: (kindnet-517678)     <rng model='virtio'>
	I1014 14:49:25.463962   57628 main.go:141] libmachine: (kindnet-517678)       <backend model='random'>/dev/random</backend>
	I1014 14:49:25.463973   57628 main.go:141] libmachine: (kindnet-517678)     </rng>
	I1014 14:49:25.463985   57628 main.go:141] libmachine: (kindnet-517678)     
	I1014 14:49:25.463995   57628 main.go:141] libmachine: (kindnet-517678)     
	I1014 14:49:25.464004   57628 main.go:141] libmachine: (kindnet-517678)   </devices>
	I1014 14:49:25.464014   57628 main.go:141] libmachine: (kindnet-517678) </domain>
	I1014 14:49:25.464023   57628 main.go:141] libmachine: (kindnet-517678) 
	I1014 14:49:25.468159   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:c9:7b:ee in network default
	I1014 14:49:25.468800   57628 main.go:141] libmachine: (kindnet-517678) Ensuring networks are active...
	I1014 14:49:25.468822   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:25.469539   57628 main.go:141] libmachine: (kindnet-517678) Ensuring network default is active
	I1014 14:49:25.469880   57628 main.go:141] libmachine: (kindnet-517678) Ensuring network mk-kindnet-517678 is active
	I1014 14:49:25.470419   57628 main.go:141] libmachine: (kindnet-517678) Getting domain xml...
	I1014 14:49:25.471304   57628 main.go:141] libmachine: (kindnet-517678) Creating domain...
	I1014 14:49:26.720757   57628 main.go:141] libmachine: (kindnet-517678) Waiting to get IP...
	I1014 14:49:26.721584   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:26.722071   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:26.722099   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:26.722017   57651 retry.go:31] will retry after 233.401362ms: waiting for machine to come up
	I1014 14:49:26.957491   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:26.958096   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:26.958127   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:26.958048   57651 retry.go:31] will retry after 384.597075ms: waiting for machine to come up
	I1014 14:49:27.344581   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:27.345077   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:27.345106   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:27.345043   57651 retry.go:31] will retry after 375.865233ms: waiting for machine to come up
	I1014 14:49:27.722713   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:27.723253   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:27.723282   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:27.723205   57651 retry.go:31] will retry after 494.299349ms: waiting for machine to come up
	I1014 14:49:28.219059   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:28.219650   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:28.219680   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:28.219586   57651 retry.go:31] will retry after 704.211059ms: waiting for machine to come up
	I1014 14:49:28.926217   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:28.927065   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:28.927094   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:28.927020   57651 retry.go:31] will retry after 648.344109ms: waiting for machine to come up
	I1014 14:49:29.576804   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:29.577273   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:29.577314   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:29.577245   57651 retry.go:31] will retry after 718.705412ms: waiting for machine to come up
	I1014 14:49:30.297929   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:30.298401   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:30.298427   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:30.298348   57651 retry.go:31] will retry after 1.028113631s: waiting for machine to come up
	I1014 14:49:31.328087   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:31.328662   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:31.328683   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:31.328603   57651 retry.go:31] will retry after 1.14771312s: waiting for machine to come up
	I1014 14:49:32.477428   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:32.477835   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:32.477862   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:32.477790   57651 retry.go:31] will retry after 1.425512206s: waiting for machine to come up
	I1014 14:49:33.904654   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:33.905146   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:33.905172   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:33.905098   57651 retry.go:31] will retry after 2.861847155s: waiting for machine to come up
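Each "will retry after ..." line above comes from a simple poll loop: check whether the new domain has picked up a DHCP lease, and if not, sleep for a growing, jittered interval before trying again. A minimal stand-alone sketch of that pattern, assuming a hypothetical lookupIP helper (this is not the minikube code itself):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the DHCP leases of the libvirt network;
// it returns an error until the guest has obtained an address.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP retries with a growing, jittered backoff, as in the log above.
func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", fmt.Errorf("timed out waiting for %s to get an IP", domain)
}

func main() {
	if _, err := waitForIP("kindnet-517678", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}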
	I1014 14:49:36.091059   57174 ssh_runner.go:235] Completed: sudo /usr/bin/crictl stop --timeout=10 88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2 71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2 d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9 9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c 23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6 50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3 5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7 0aaf4b11e23ae5119c7ee4385388385a10fa8bd28c07748314dc8e825f88bbe7: (15.153369379s)
	W1014 14:49:36.091139   57174 kubeadm.go:644] Failed to stop kube-system containers, port conflicts may arise: stop: crictl: sudo /usr/bin/crictl stop --timeout=10 88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2 71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2 d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9 9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c 23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6 50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3 5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7 0aaf4b11e23ae5119c7ee4385388385a10fa8bd28c07748314dc8e825f88bbe7: Process exited with status 1
	stdout:
	88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a
	d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2
	71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2
	d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9
	9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c
	23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6
	50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e
	b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3
	
	stderr:
	E1014 14:49:36.078593    3847 remote_runtime.go:366] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7\": container with ID starting with 5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7 not found: ID does not exist" containerID="5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7"
	time="2024-10-14T14:49:36Z" level=fatal msg="stopping the container \"5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7\": rpc error: code = NotFound desc = could not find container \"5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7\": container with ID starting with 5c97e8d8f448311c2115b6210ffd0d4ad91ba80614371ab3254cf761f653c7b7 not found: ID does not exist"
	I1014 14:49:36.091246   57174 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 14:49:36.140076   57174 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 14:49:36.152169   57174 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Oct 14 14:48 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Oct 14 14:48 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Oct 14 14:48 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Oct 14 14:48 /etc/kubernetes/scheduler.conf
	
	I1014 14:49:36.152249   57174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 14:49:36.162114   57174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 14:49:36.172381   57174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 14:49:36.182694   57174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 14:49:36.182761   57174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 14:49:36.193303   57174 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 14:49:36.203395   57174 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1014 14:49:36.203458   57174 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
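The sequence above greps each existing kubeconfig-style file for the expected control-plane endpoint and removes the ones that no longer reference it, so the subsequent "kubeadm init phase kubeconfig all" run regenerates them. A minimal sketch of that check-and-remove pattern, with the paths and endpoint taken from the log (illustrative only):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureConfigPointsAt deletes a kubeconfig-style file that no longer
// references the expected control-plane endpoint, mirroring the
// grep-then-"rm -f" sequence in the log above.
func ensureConfigPointsAt(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // config already points at the expected endpoint
	}
	return os.Remove(path)
}

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := ensureConfigPointsAt(f, endpoint); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}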
	I1014 14:49:36.213339   57174 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 14:49:36.223644   57174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:49:36.284297   57174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:49:37.761480   57174 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.477144603s)
	I1014 14:49:37.761516   57174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:49:38.000046   57174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:49:38.071099   57174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:49:38.179718   57174 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:49:38.179822   57174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:49:38.680618   57174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:49:39.180208   57174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:49:39.199834   57174 api_server.go:72] duration metric: took 1.020114376s to wait for apiserver process to appear ...
	I1014 14:49:39.199863   57174 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:49:39.199890   57174 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I1014 14:49:36.769132   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:36.769654   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:36.769687   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:36.769635   57651 retry.go:31] will retry after 2.211303625s: waiting for machine to come up
	I1014 14:49:38.983787   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:38.984217   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:38.984247   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:38.984181   57651 retry.go:31] will retry after 3.450667792s: waiting for machine to come up
	I1014 14:49:41.871217   57174 api_server.go:279] https://192.168.50.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 14:49:41.871248   57174 api_server.go:103] status: https://192.168.50.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 14:49:41.871278   57174 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I1014 14:49:41.908282   57174 api_server.go:279] https://192.168.50.21:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 14:49:41.908307   57174 api_server.go:103] status: https://192.168.50.21:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 14:49:42.200745   57174 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I1014 14:49:42.209019   57174 api_server.go:279] https://192.168.50.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 14:49:42.209059   57174 api_server.go:103] status: https://192.168.50.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 14:49:42.700660   57174 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I1014 14:49:42.708668   57174 api_server.go:279] https://192.168.50.21:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 14:49:42.708693   57174 api_server.go:103] status: https://192.168.50.21:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 14:49:43.200341   57174 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I1014 14:49:43.205595   57174 api_server.go:279] https://192.168.50.21:8443/healthz returned 200:
	ok
	I1014 14:49:43.211885   57174 api_server.go:141] control plane version: v1.31.1
	I1014 14:49:43.211909   57174 api_server.go:131] duration metric: took 4.012040159s to wait for apiserver health ...
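The healthz polling above moves from 403 (anonymous access to /healthz not yet permitted) through 500 (the rbac/bootstrap-roles and scheduling post-start hooks still pending) to 200 with body "ok". A self-contained sketch of that polling loop, assuming the endpoint from the log and skipping certificate verification purely to keep the example short:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// printing intermediate non-200 responses much like the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned "ok"
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.50.21:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}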
	I1014 14:49:43.211918   57174 cni.go:84] Creating CNI manager for ""
	I1014 14:49:43.211923   57174 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:49:43.214027   57174 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 14:49:43.216171   57174 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 14:49:43.229452   57174 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 14:49:43.248118   57174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 14:49:43.248202   57174 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1014 14:49:43.248216   57174 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1014 14:49:43.258787   57174 system_pods.go:59] 8 kube-system pods found
	I1014 14:49:43.258820   57174 system_pods.go:61] "coredns-7c65d6cfc9-7vxmm" [a0f340c9-c730-417b-93d7-f9f59c28e812] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 14:49:43.258828   57174 system_pods.go:61] "coredns-7c65d6cfc9-mwwf2" [9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 14:49:43.258837   57174 system_pods.go:61] "etcd-kubernetes-upgrade-058309" [14eeb9ea-6f57-49ab-9773-0e02b67a8c0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 14:49:43.258842   57174 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-058309" [03f00ebb-9e75-4611-a11f-626f85492e42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 14:49:43.258849   57174 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-058309" [eb73cd03-6081-4598-b6a3-7ac232bcf62b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 14:49:43.258853   57174 system_pods.go:61] "kube-proxy-klr59" [30703798-788f-42c0-8206-69b63f835f5e] Running
	I1014 14:49:43.258858   57174 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-058309" [b1456882-7fc2-4d39-a300-23815237f153] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 14:49:43.258863   57174 system_pods.go:61] "storage-provisioner" [c1f8d0b8-cf29-453f-813e-490acd6a3801] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 14:49:43.258870   57174 system_pods.go:74] duration metric: took 10.731594ms to wait for pod list to return data ...
	I1014 14:49:43.258880   57174 node_conditions.go:102] verifying NodePressure condition ...
	I1014 14:49:43.263436   57174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 14:49:43.263459   57174 node_conditions.go:123] node cpu capacity is 2
	I1014 14:49:43.263469   57174 node_conditions.go:105] duration metric: took 4.584573ms to run NodePressure ...
	I1014 14:49:43.263484   57174 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 14:49:43.590527   57174 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 14:49:43.603822   57174 ops.go:34] apiserver oom_adj: -16
	I1014 14:49:43.603849   57174 kubeadm.go:597] duration metric: took 22.780834822s to restartPrimaryControlPlane
	I1014 14:49:43.603860   57174 kubeadm.go:394] duration metric: took 23.072196355s to StartCluster
	I1014 14:49:43.603881   57174 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:49:43.603960   57174 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:49:43.605049   57174 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:49:43.605278   57174 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.21 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 14:49:43.605468   57174 config.go:182] Loaded profile config "kubernetes-upgrade-058309": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:49:43.605430   57174 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 14:49:43.605532   57174 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-058309"
	I1014 14:49:43.605552   57174 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-058309"
	I1014 14:49:43.605553   57174 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-058309"
	W1014 14:49:43.605560   57174 addons.go:243] addon storage-provisioner should already be in state true
	I1014 14:49:43.605576   57174 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-058309"
	I1014 14:49:43.605595   57174 host.go:66] Checking if "kubernetes-upgrade-058309" exists ...
	I1014 14:49:43.605965   57174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:49:43.605984   57174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:49:43.606004   57174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:49:43.606014   57174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:49:43.607997   57174 out.go:177] * Verifying Kubernetes components...
	I1014 14:49:43.609518   57174 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:49:43.621687   57174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
	I1014 14:49:43.622237   57174 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:49:43.622784   57174 main.go:141] libmachine: Using API Version  1
	I1014 14:49:43.622808   57174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:49:43.623149   57174 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:49:43.623330   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetState
	I1014 14:49:43.626119   57174 kapi.go:59] client config for kubernetes-upgrade-058309: &rest.Config{Host:"https://192.168.50.21:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.crt", KeyFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kubernetes-upgrade-058309/client.key", CAFile:"/home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2432aa0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1014 14:49:43.626194   57174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38821
	I1014 14:49:43.626417   57174 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-058309"
	W1014 14:49:43.626436   57174 addons.go:243] addon default-storageclass should already be in state true
	I1014 14:49:43.626464   57174 host.go:66] Checking if "kubernetes-upgrade-058309" exists ...
	I1014 14:49:43.626706   57174 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:49:43.627044   57174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:49:43.627098   57174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:49:43.627183   57174 main.go:141] libmachine: Using API Version  1
	I1014 14:49:43.627204   57174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:49:43.627602   57174 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:49:43.628176   57174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:49:43.628225   57174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:49:43.643580   57174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43629
	I1014 14:49:43.644043   57174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40285
	I1014 14:49:43.644077   57174 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:49:43.644411   57174 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:49:43.644784   57174 main.go:141] libmachine: Using API Version  1
	I1014 14:49:43.644801   57174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:49:43.644925   57174 main.go:141] libmachine: Using API Version  1
	I1014 14:49:43.644937   57174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:49:43.645128   57174 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:49:43.645191   57174 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:49:43.645364   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetState
	I1014 14:49:43.645545   57174 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:49:43.645584   57174 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:49:43.647034   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:49:43.654679   57174 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:49:43.656761   57174 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:49:43.656788   57174 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 14:49:43.656816   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:49:43.660523   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:49:43.661014   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:49:43.661038   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:49:43.661359   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:49:43.661596   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:49:43.661764   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:49:43.661934   57174 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa Username:docker}
	I1014 14:49:43.667898   57174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34823
	I1014 14:49:43.668494   57174 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:49:43.669122   57174 main.go:141] libmachine: Using API Version  1
	I1014 14:49:43.669146   57174 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:49:43.669553   57174 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:49:43.669751   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetState
	I1014 14:49:43.671764   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .DriverName
	I1014 14:49:43.671990   57174 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 14:49:43.672005   57174 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 14:49:43.672025   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHHostname
	I1014 14:49:43.674912   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:49:43.675358   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:14:45", ip: ""} in network mk-kubernetes-upgrade-058309: {Iface:virbr2 ExpiryTime:2024-10-14 15:44:03 +0000 UTC Type:0 Mac:52:54:00:58:14:45 Iaid: IPaddr:192.168.50.21 Prefix:24 Hostname:kubernetes-upgrade-058309 Clientid:01:52:54:00:58:14:45}
	I1014 14:49:43.675383   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | domain kubernetes-upgrade-058309 has defined IP address 192.168.50.21 and MAC address 52:54:00:58:14:45 in network mk-kubernetes-upgrade-058309
	I1014 14:49:43.675570   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHPort
	I1014 14:49:43.675740   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHKeyPath
	I1014 14:49:43.675895   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .GetSSHUsername
	I1014 14:49:43.676042   57174 sshutil.go:53] new ssh client: &{IP:192.168.50.21 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/kubernetes-upgrade-058309/id_rsa Username:docker}
	I1014 14:49:43.803385   57174 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:49:43.822174   57174 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:49:43.822273   57174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:49:43.836963   57174 api_server.go:72] duration metric: took 231.6552ms to wait for apiserver process to appear ...
	I1014 14:49:43.836994   57174 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:49:43.837015   57174 api_server.go:253] Checking apiserver healthz at https://192.168.50.21:8443/healthz ...
	I1014 14:49:43.841460   57174 api_server.go:279] https://192.168.50.21:8443/healthz returned 200:
	ok
	I1014 14:49:43.842410   57174 api_server.go:141] control plane version: v1.31.1
	I1014 14:49:43.842440   57174 api_server.go:131] duration metric: took 5.438027ms to wait for apiserver health ...
	I1014 14:49:43.842450   57174 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 14:49:43.849002   57174 system_pods.go:59] 8 kube-system pods found
	I1014 14:49:43.849029   57174 system_pods.go:61] "coredns-7c65d6cfc9-7vxmm" [a0f340c9-c730-417b-93d7-f9f59c28e812] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 14:49:43.849037   57174 system_pods.go:61] "coredns-7c65d6cfc9-mwwf2" [9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 14:49:43.849045   57174 system_pods.go:61] "etcd-kubernetes-upgrade-058309" [14eeb9ea-6f57-49ab-9773-0e02b67a8c0d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 14:49:43.849053   57174 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-058309" [03f00ebb-9e75-4611-a11f-626f85492e42] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 14:49:43.849059   57174 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-058309" [eb73cd03-6081-4598-b6a3-7ac232bcf62b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 14:49:43.849063   57174 system_pods.go:61] "kube-proxy-klr59" [30703798-788f-42c0-8206-69b63f835f5e] Running
	I1014 14:49:43.849069   57174 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-058309" [b1456882-7fc2-4d39-a300-23815237f153] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 14:49:43.849072   57174 system_pods.go:61] "storage-provisioner" [c1f8d0b8-cf29-453f-813e-490acd6a3801] Running
	I1014 14:49:43.849078   57174 system_pods.go:74] duration metric: took 6.622645ms to wait for pod list to return data ...
	I1014 14:49:43.849090   57174 kubeadm.go:582] duration metric: took 243.786642ms to wait for: map[apiserver:true system_pods:true]
	I1014 14:49:43.849103   57174 node_conditions.go:102] verifying NodePressure condition ...
	I1014 14:49:43.852470   57174 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 14:49:43.852489   57174 node_conditions.go:123] node cpu capacity is 2
	I1014 14:49:43.852498   57174 node_conditions.go:105] duration metric: took 3.390923ms to run NodePressure ...
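	(A minimal client-go sketch of the same two checks, assuming a kubeconfig at /var/lib/minikube/kubeconfig: list the kube-system pods and read the node's cpu and ephemeral-storage capacity as logged above. Illustrative only, not the test harness's implementation.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location, matching the path used elsewhere in the log.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Equivalent of "waiting for kube-system pods to appear".
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("kube-system pods:", len(pods.Items))

		// Equivalent of the NodePressure capacity checks (cpu, ephemeral storage).
		nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
				n.Name,
				n.Status.Capacity.Cpu().String(),
				n.Status.Capacity.StorageEphemeral().String())
		}
	}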
	I1014 14:49:43.852507   57174 start.go:241] waiting for startup goroutines ...
	I1014 14:49:43.950613   57174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:49:43.982825   57174 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
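	(A hedged sketch of the addon apply step above, run locally with os/exec rather than over SSH with sudo as the harness does; the binary and manifest paths mirror the log and are assumptions about the node layout.)

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		kubectl := "/var/lib/minikube/binaries/v1.31.1/kubectl"
		for _, manifest := range []string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		} {
			cmd := exec.Command(kubectl, "apply", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Println("apply failed:", manifest, err)
			}
		}
	}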
	I1014 14:49:44.717960   57174 main.go:141] libmachine: Making call to close driver server
	I1014 14:49:44.717992   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .Close
	I1014 14:49:44.718086   57174 main.go:141] libmachine: Making call to close driver server
	I1014 14:49:44.718097   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .Close
	I1014 14:49:44.718298   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Closing plugin on server side
	I1014 14:49:44.718340   57174 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:49:44.718351   57174 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:49:44.718366   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Closing plugin on server side
	I1014 14:49:44.718393   57174 main.go:141] libmachine: Making call to close driver server
	I1014 14:49:44.718413   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .Close
	I1014 14:49:44.718443   57174 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:49:44.718465   57174 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:49:44.718482   57174 main.go:141] libmachine: Making call to close driver server
	I1014 14:49:44.718492   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .Close
	I1014 14:49:44.721329   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Closing plugin on server side
	I1014 14:49:44.721334   57174 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:49:44.721341   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Closing plugin on server side
	I1014 14:49:44.721348   57174 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:49:44.721341   57174 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:49:44.721363   57174 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:49:44.730395   57174 main.go:141] libmachine: Making call to close driver server
	I1014 14:49:44.730412   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) Calling .Close
	I1014 14:49:44.730731   57174 main.go:141] libmachine: Successfully made call to close driver server
	I1014 14:49:44.730754   57174 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 14:49:44.730738   57174 main.go:141] libmachine: (kubernetes-upgrade-058309) DBG | Closing plugin on server side
	I1014 14:49:44.733326   57174 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1014 14:49:44.734577   57174 addons.go:510] duration metric: took 1.129231029s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1014 14:49:44.734642   57174 start.go:246] waiting for cluster config update ...
	I1014 14:49:44.734657   57174 start.go:255] writing updated cluster config ...
	I1014 14:49:44.734954   57174 ssh_runner.go:195] Run: rm -f paused
	I1014 14:49:44.796368   57174 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 14:49:44.797909   57174 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-058309" cluster and "default" namespace by default
	I1014 14:49:42.436985   57628 main.go:141] libmachine: (kindnet-517678) DBG | domain kindnet-517678 has defined MAC address 52:54:00:dd:c5:6d in network mk-kindnet-517678
	I1014 14:49:42.437576   57628 main.go:141] libmachine: (kindnet-517678) DBG | unable to find current IP address of domain kindnet-517678 in network mk-kindnet-517678
	I1014 14:49:42.437598   57628 main.go:141] libmachine: (kindnet-517678) DBG | I1014 14:49:42.437532   57651 retry.go:31] will retry after 4.816161945s: waiting for machine to come up
	
	
	==> CRI-O <==
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.697291170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728917385697262158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97ed139a-3ee5-4750-8c21-ca23844cd1c4 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.697888022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9da33681-0314-4159-be48-883c214b2170 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.697950662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9da33681-0314-4159-be48-883c214b2170 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.698309590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5aceeae4e0900ba27343f7f4ca7287baad0e1450dac852c17c2cfa6117505ad4,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382444386557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a70dadc1427d6c6ebe9d939c7eff9015ff955cbede4e4820b6381e0554bcf,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382462710543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448321172f8ff1f590f6bd2b85d9f5623539255dca715ee2ea4c13a32b5aff56,PodSandboxId:62f9466c26f2db357ba3acec397836a5deaaa6fb05c92065b8c11aa7b4f9de5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1728917382418583252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd5900fff6913d36e6abad9b4902bb4d5413fc09d63dd3179ca3206da8f12e8,PodSandboxId:bd61dacbb853b8f0ff7c677141f978c8fb3856726e2829d5437cf7d9e1f5d0d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1728917378627009831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e473e65e4ebf1fc3053b341c7a9a6faefeb92b1c63bd50efceca48cc13251dd4,PodSandboxId:54d05ca76fa6e976148a379e8ae37ac873ad86bfbe63a011a15cf5c2ca6afaa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_R
UNNING,CreatedAt:1728917378619197314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1293d129c59ed9114df06371d786c1d05aeeb355384914cae4d38b6eb5c89f,PodSandboxId:c86ebe1e020f8e9e46b347550af02a9481728b11fd4a090e91345f9bc0061cdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINE
R_RUNNING,CreatedAt:1728917378647221290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a354ffc708c2c65ec3c1aa1236b3bf52594a3f60c0d857fadc65c065b5b439,PodSandboxId:43d39b50122a7c000e6640e694d308469e79ef9ebbb729235dea310cb102bbe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17289
17378602813181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89134f3282d1656982e8619a980fa156348484d927a578d8070d048cdc457407,PodSandboxId:73124168ef372ffa13dceab43c18c6626814e2e82d465cc99db0f1acfbd1a854,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17289173752635
81615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360629898825,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360543983436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9,PodSandboxId:13ff55f1a2cf2a3034aa5e20cb4265340d5eed27aa0466
be8e07a224f695dba5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728917357138970783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2,PodSandboxId:09dc5a7e969e6f5da154755a21bcc5f61d4505bfe515032c5142fc337efcb8e5,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728917357233723717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c,PodSandboxId:10f7b757c97eb9837c4af114c3bf4c9e9692a1bffececa4edd5e41927525092d,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728917357124764079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6,PodSandboxId:f0fd0f1338ae8a891c5241a7be462567a77ae1ca13c70f04c78737e8e40d4e56,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728917357082265351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e,PodSandboxId:c0b5099ba0b87360343d9c0fd839bdfb10fad0cc1796b2b233098b7f05f4d17c,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728917357044118052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3,PodSandboxId:a71cf1bb3f4c65ad29c38380800f1d118372aa887324c2064a4f3613b7e494ff,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728917356976311903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9da33681-0314-4159-be48-883c214b2170 name=/runtime.v1.RuntimeService/ListContainers
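	(The entries above are CRI-O's side of ordinary CRI gRPC calls: Version, ImageFsInfo, ListContainers. A small sketch of issuing the same Version and ListContainers calls directly against the CRI-O socket using the generated k8s.io/cri-api client; the socket path is an assumption, matching the usual CRI-O default.)

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Assumed CRI-O socket path; the runtime socket is local, so no TLS is used.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)

		ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
		if err != nil {
			panic(err)
		}
		fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %s  attempt=%d  %s\n",
				c.Id[:12], c.Metadata.Name, c.Metadata.Attempt, c.State)
		}
	}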
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.754043965Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fceeb5a3-a18e-444d-ab46-e92711013957 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.754119956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fceeb5a3-a18e-444d-ab46-e92711013957 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.755546719Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f77ba809-bc78-4f82-87c4-1224e5643349 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.756043049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728917385756013614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f77ba809-bc78-4f82-87c4-1224e5643349 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.756609732Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b57177c1-d365-4beb-b914-58970643e0eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.756695971Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b57177c1-d365-4beb-b914-58970643e0eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.757232666Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5aceeae4e0900ba27343f7f4ca7287baad0e1450dac852c17c2cfa6117505ad4,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382444386557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a70dadc1427d6c6ebe9d939c7eff9015ff955cbede4e4820b6381e0554bcf,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382462710543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448321172f8ff1f590f6bd2b85d9f5623539255dca715ee2ea4c13a32b5aff56,PodSandboxId:62f9466c26f2db357ba3acec397836a5deaaa6fb05c92065b8c11aa7b4f9de5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1728917382418583252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd5900fff6913d36e6abad9b4902bb4d5413fc09d63dd3179ca3206da8f12e8,PodSandboxId:bd61dacbb853b8f0ff7c677141f978c8fb3856726e2829d5437cf7d9e1f5d0d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1728917378627009831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e473e65e4ebf1fc3053b341c7a9a6faefeb92b1c63bd50efceca48cc13251dd4,PodSandboxId:54d05ca76fa6e976148a379e8ae37ac873ad86bfbe63a011a15cf5c2ca6afaa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_R
UNNING,CreatedAt:1728917378619197314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1293d129c59ed9114df06371d786c1d05aeeb355384914cae4d38b6eb5c89f,PodSandboxId:c86ebe1e020f8e9e46b347550af02a9481728b11fd4a090e91345f9bc0061cdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINE
R_RUNNING,CreatedAt:1728917378647221290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a354ffc708c2c65ec3c1aa1236b3bf52594a3f60c0d857fadc65c065b5b439,PodSandboxId:43d39b50122a7c000e6640e694d308469e79ef9ebbb729235dea310cb102bbe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17289
17378602813181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89134f3282d1656982e8619a980fa156348484d927a578d8070d048cdc457407,PodSandboxId:73124168ef372ffa13dceab43c18c6626814e2e82d465cc99db0f1acfbd1a854,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17289173752635
81615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360629898825,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360543983436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9,PodSandboxId:13ff55f1a2cf2a3034aa5e20cb4265340d5eed27aa0466
be8e07a224f695dba5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728917357138970783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2,PodSandboxId:09dc5a7e969e6f5da154755a21bcc5f61d4505bfe515032c5142fc337efcb8e5,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728917357233723717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c,PodSandboxId:10f7b757c97eb9837c4af114c3bf4c9e9692a1bffececa4edd5e41927525092d,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728917357124764079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6,PodSandboxId:f0fd0f1338ae8a891c5241a7be462567a77ae1ca13c70f04c78737e8e40d4e56,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728917357082265351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e,PodSandboxId:c0b5099ba0b87360343d9c0fd839bdfb10fad0cc1796b2b233098b7f05f4d17c,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728917357044118052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3,PodSandboxId:a71cf1bb3f4c65ad29c38380800f1d118372aa887324c2064a4f3613b7e494ff,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728917356976311903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b57177c1-d365-4beb-b914-58970643e0eb name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.807209425Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2aec8b0-4ce4-4e00-90c8-573089f7e36b name=/runtime.v1.RuntimeService/Version
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.807287329Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2aec8b0-4ce4-4e00-90c8-573089f7e36b name=/runtime.v1.RuntimeService/Version
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.808224330Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c06c5400-e469-4345-bc32-5adbefd5e3d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.808692979Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728917385808666602,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c06c5400-e469-4345-bc32-5adbefd5e3d5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.809166335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=86a6cd38-cc8b-4f4c-a2b4-8a7f2c0ed387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.809223110Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=86a6cd38-cc8b-4f4c-a2b4-8a7f2c0ed387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.809744193Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5aceeae4e0900ba27343f7f4ca7287baad0e1450dac852c17c2cfa6117505ad4,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382444386557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a70dadc1427d6c6ebe9d939c7eff9015ff955cbede4e4820b6381e0554bcf,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382462710543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448321172f8ff1f590f6bd2b85d9f5623539255dca715ee2ea4c13a32b5aff56,PodSandboxId:62f9466c26f2db357ba3acec397836a5deaaa6fb05c92065b8c11aa7b4f9de5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1728917382418583252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd5900fff6913d36e6abad9b4902bb4d5413fc09d63dd3179ca3206da8f12e8,PodSandboxId:bd61dacbb853b8f0ff7c677141f978c8fb3856726e2829d5437cf7d9e1f5d0d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1728917378627009831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e473e65e4ebf1fc3053b341c7a9a6faefeb92b1c63bd50efceca48cc13251dd4,PodSandboxId:54d05ca76fa6e976148a379e8ae37ac873ad86bfbe63a011a15cf5c2ca6afaa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_R
UNNING,CreatedAt:1728917378619197314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1293d129c59ed9114df06371d786c1d05aeeb355384914cae4d38b6eb5c89f,PodSandboxId:c86ebe1e020f8e9e46b347550af02a9481728b11fd4a090e91345f9bc0061cdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINE
R_RUNNING,CreatedAt:1728917378647221290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a354ffc708c2c65ec3c1aa1236b3bf52594a3f60c0d857fadc65c065b5b439,PodSandboxId:43d39b50122a7c000e6640e694d308469e79ef9ebbb729235dea310cb102bbe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17289
17378602813181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89134f3282d1656982e8619a980fa156348484d927a578d8070d048cdc457407,PodSandboxId:73124168ef372ffa13dceab43c18c6626814e2e82d465cc99db0f1acfbd1a854,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17289173752635
81615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360629898825,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360543983436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9,PodSandboxId:13ff55f1a2cf2a3034aa5e20cb4265340d5eed27aa0466
be8e07a224f695dba5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728917357138970783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2,PodSandboxId:09dc5a7e969e6f5da154755a21bcc5f61d4505bfe515032c5142fc337efcb8e5,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728917357233723717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c,PodSandboxId:10f7b757c97eb9837c4af114c3bf4c9e9692a1bffececa4edd5e41927525092d,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728917357124764079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6,PodSandboxId:f0fd0f1338ae8a891c5241a7be462567a77ae1ca13c70f04c78737e8e40d4e56,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728917357082265351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e,PodSandboxId:c0b5099ba0b87360343d9c0fd839bdfb10fad0cc1796b2b233098b7f05f4d17c,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728917357044118052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3,PodSandboxId:a71cf1bb3f4c65ad29c38380800f1d118372aa887324c2064a4f3613b7e494ff,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728917356976311903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=86a6cd38-cc8b-4f4c-a2b4-8a7f2c0ed387 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.855501978Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca7293d9-cacb-4b55-bf47-26ea16b25fd3 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.855614382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca7293d9-cacb-4b55-bf47-26ea16b25fd3 name=/runtime.v1.RuntimeService/Version
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.856862616Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=886af619-8f7c-410a-b269-7f57af00d83c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.857236356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728917385857210624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125697,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=886af619-8f7c-410a-b269-7f57af00d83c name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.857747608Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0722ac9a-ddbf-4803-993a-401d026e4ce1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.857801042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0722ac9a-ddbf-4803-993a-401d026e4ce1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 14:49:45 kubernetes-upgrade-058309 crio[3000]: time="2024-10-14 14:49:45.858186348Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5aceeae4e0900ba27343f7f4ca7287baad0e1450dac852c17c2cfa6117505ad4,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382444386557,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60a70dadc1427d6c6ebe9d939c7eff9015ff955cbede4e4820b6381e0554bcf,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728917382462710543,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:448321172f8ff1f590f6bd2b85d9f5623539255dca715ee2ea4c13a32b5aff56,PodSandboxId:62f9466c26f2db357ba3acec397836a5deaaa6fb05c92065b8c11aa7b4f9de5a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Stat
e:CONTAINER_RUNNING,CreatedAt:1728917382418583252,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd5900fff6913d36e6abad9b4902bb4d5413fc09d63dd3179ca3206da8f12e8,PodSandboxId:bd61dacbb853b8f0ff7c677141f978c8fb3856726e2829d5437cf7d9e1f5d0d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNI
NG,CreatedAt:1728917378627009831,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e473e65e4ebf1fc3053b341c7a9a6faefeb92b1c63bd50efceca48cc13251dd4,PodSandboxId:54d05ca76fa6e976148a379e8ae37ac873ad86bfbe63a011a15cf5c2ca6afaa4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_R
UNNING,CreatedAt:1728917378619197314,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa1293d129c59ed9114df06371d786c1d05aeeb355384914cae4d38b6eb5c89f,PodSandboxId:c86ebe1e020f8e9e46b347550af02a9481728b11fd4a090e91345f9bc0061cdc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINE
R_RUNNING,CreatedAt:1728917378647221290,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0a354ffc708c2c65ec3c1aa1236b3bf52594a3f60c0d857fadc65c065b5b439,PodSandboxId:43d39b50122a7c000e6640e694d308469e79ef9ebbb729235dea310cb102bbe0,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:17289
17378602813181,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89134f3282d1656982e8619a980fa156348484d927a578d8070d048cdc457407,PodSandboxId:73124168ef372ffa13dceab43c18c6626814e2e82d465cc99db0f1acfbd1a854,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:17289173752635
81615,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a,PodSandboxId:8df42a5342194fe190958594f0331cfa3326219e58cd2ff600c75ddf22e73cef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360629898825,Labels:map[string]string{io.kub
ernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7vxmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0f340c9-c730-417b-93d7-f9f59c28e812,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2,PodSandboxId:35d46f58c2b7f8f61681858731dc0cffeb350660274ad3bf0e1cdca692e638e6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},Us
erSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1728917360543983436,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mwwf2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9fc5df64-f9f6-4d51-92a5-d8ab7aa30cb1,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9,PodSandboxId:13ff55f1a2cf2a3034aa5e20cb4265340d5eed27aa0466
be8e07a224f695dba5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_EXITED,CreatedAt:1728917357138970783,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-klr59,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30703798-788f-42c0-8206-69b63f835f5e,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2,PodSandboxId:09dc5a7e969e6f5da154755a21bcc5f61d4505bfe515032c5142fc337efcb8e5,Metadata:&Conta
inerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_EXITED,CreatedAt:1728917357233723717,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c9dabc69f52e0a78dc85fd6f7db79ee,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c,PodSandboxId:10f7b757c97eb9837c4af114c3bf4c9e9692a1bffececa4edd5e41927525092d,Metadata:&ContainerMe
tadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1728917357124764079,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f85edd2cb4d0d7db7e86c71d0d270bd3,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6,PodSandboxId:f0fd0f1338ae8a891c5241a7be462567a77ae1ca13c70f04c78737e8e40d4e56,Metadata:&ContainerMetadata{Name:kube-controller-manager,
Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_EXITED,CreatedAt:1728917357082265351,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: baaaf35429abcb492ede7d23295c0d76,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e,PodSandboxId:c0b5099ba0b87360343d9c0fd839bdfb10fad0cc1796b2b233098b7f05f4d17c,Metadata:&ContainerMetadata{Name:kub
e-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728917357044118052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-058309,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2fe09a269f4d5bd9fec6d848d3696ad1,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3,PodSandboxId:a71cf1bb3f4c65ad29c38380800f1d118372aa887324c2064a4f3613b7e494ff,Metadata:&ContainerMetadata{Name:storage-p
rovisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728917356976311903,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c1f8d0b8-cf29-453f-813e-490acd6a3801,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0722ac9a-ddbf-4803-993a-401d026e4ce1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f60a70dadc142       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   8df42a5342194       coredns-7c65d6cfc9-7vxmm
	5aceeae4e0900       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   3 seconds ago       Running             coredns                   2                   35d46f58c2b7f       coredns-7c65d6cfc9-mwwf2
	448321172f8ff       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   62f9466c26f2d       storage-provisioner
	fa1293d129c59       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   7 seconds ago       Running             etcd                      2                   c86ebe1e020f8       etcd-kubernetes-upgrade-058309
	dbd5900fff691       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   7 seconds ago       Running             kube-scheduler            2                   bd61dacbb853b       kube-scheduler-kubernetes-upgrade-058309
	e473e65e4ebf1       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   7 seconds ago       Running             kube-controller-manager   2                   54d05ca76fa6e       kube-controller-manager-kubernetes-upgrade-058309
	c0a354ffc708c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   7 seconds ago       Running             kube-apiserver            2                   43d39b50122a7       kube-apiserver-kubernetes-upgrade-058309
	89134f3282d16       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   10 seconds ago      Running             kube-proxy                2                   73124168ef372       kube-proxy-klr59
	88ab529b63edc       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Exited              coredns                   1                   8df42a5342194       coredns-7c65d6cfc9-7vxmm
	d7b6b5c13e233       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   25 seconds ago      Exited              coredns                   1                   35d46f58c2b7f       coredns-7c65d6cfc9-mwwf2
	71e048c416be0       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   28 seconds ago      Exited              kube-scheduler            1                   09dc5a7e969e6       kube-scheduler-kubernetes-upgrade-058309
	d5c832f52fe15       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   28 seconds ago      Exited              kube-proxy                1                   13ff55f1a2cf2       kube-proxy-klr59
	9929f871b60eb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   28 seconds ago      Exited              etcd                      1                   10f7b757c97eb       etcd-kubernetes-upgrade-058309
	23448dc575bae       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   28 seconds ago      Exited              kube-controller-manager   1                   f0fd0f1338ae8       kube-controller-manager-kubernetes-upgrade-058309
	50a9a22a0b9d4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   28 seconds ago      Exited              kube-apiserver            1                   c0b5099ba0b87       kube-apiserver-kubernetes-upgrade-058309
	b79910e970381       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   28 seconds ago      Exited              storage-provisioner       1                   a71cf1bb3f4c6       storage-provisioner
	
	
	==> coredns [5aceeae4e0900ba27343f7f4ca7287baad0e1450dac852c17c2cfa6117505ad4] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f60a70dadc1427d6c6ebe9d939c7eff9015ff955cbede4e4820b6381e0554bcf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-058309
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-058309
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:48:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-058309
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:49:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:49:42 +0000   Mon, 14 Oct 2024 14:48:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:49:42 +0000   Mon, 14 Oct 2024 14:48:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:49:42 +0000   Mon, 14 Oct 2024 14:48:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:49:42 +0000   Mon, 14 Oct 2024 14:48:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.21
	  Hostname:    kubernetes-upgrade-058309
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3801186e4153428685b0b12072e581dc
	  System UUID:                3801186e-4153-4286-85b0-b12072e581dc
	  Boot ID:                    4d51eec6-cddb-475e-a3be-1f4ddcbbf6e4
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7vxmm                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 coredns-7c65d6cfc9-mwwf2                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     44s
	  kube-system                 etcd-kubernetes-upgrade-058309                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         49s
	  kube-system                 kube-apiserver-kubernetes-upgrade-058309             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-058309    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-klr59                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-scheduler-kubernetes-upgrade-058309             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 41s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  56s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node kubernetes-upgrade-058309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node kubernetes-upgrade-058309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x7 over 56s)  kubelet          Node kubernetes-upgrade-058309 status is now: NodeHasSufficientPID
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           45s                node-controller  Node kubernetes-upgrade-058309 event: Registered Node kubernetes-upgrade-058309 in Controller
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-058309 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 8s)    kubelet          Node kubernetes-upgrade-058309 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 8s)    kubelet          Node kubernetes-upgrade-058309 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                 node-controller  Node kubernetes-upgrade-058309 event: Registered Node kubernetes-upgrade-058309 in Controller
	
	
	==> dmesg <==
	[  +2.543076] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +4.989774] systemd-fstab-generator[554]: Ignoring "noauto" option for root device
	[  +0.088650] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066830] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.175269] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.155180] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.297011] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +4.398247] systemd-fstab-generator[713]: Ignoring "noauto" option for root device
	[  +0.066627] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.173837] systemd-fstab-generator[836]: Ignoring "noauto" option for root device
	[Oct14 14:49] systemd-fstab-generator[1210]: Ignoring "noauto" option for root device
	[  +0.111972] kauditd_printk_skb: 97 callbacks suppressed
	[ +13.001368] systemd-fstab-generator[2166]: Ignoring "noauto" option for root device
	[  +0.079841] kauditd_printk_skb: 105 callbacks suppressed
	[  +0.074639] systemd-fstab-generator[2178]: Ignoring "noauto" option for root device
	[  +0.284242] systemd-fstab-generator[2208]: Ignoring "noauto" option for root device
	[  +0.226813] systemd-fstab-generator[2310]: Ignoring "noauto" option for root device
	[  +1.317818] systemd-fstab-generator[2876]: Ignoring "noauto" option for root device
	[  +1.184834] systemd-fstab-generator[3207]: Ignoring "noauto" option for root device
	[ +12.705549] kauditd_printk_skb: 300 callbacks suppressed
	[  +5.922770] systemd-fstab-generator[3989]: Ignoring "noauto" option for root device
	[  +5.518483] kauditd_printk_skb: 58 callbacks suppressed
	[  +0.279103] systemd-fstab-generator[4480]: Ignoring "noauto" option for root device
	
	
	==> etcd [9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c] <==
	{"level":"info","ts":"2024-10-14T14:49:17.854980Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-10-14T14:49:17.889200Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","commit-index":385}
	{"level":"info","ts":"2024-10-14T14:49:17.889456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab switched to configuration voters=()"}
	{"level":"info","ts":"2024-10-14T14:49:17.889592Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became follower at term 2"}
	{"level":"info","ts":"2024-10-14T14:49:17.889605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 6b85f157810fe4ab [peers: [], term: 2, commit: 385, applied: 0, lastindex: 385, lastterm: 2]"}
	{"level":"warn","ts":"2024-10-14T14:49:17.900007Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2024-10-14T14:49:17.927150Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":377}
	{"level":"info","ts":"2024-10-14T14:49:17.935446Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2024-10-14T14:49:17.942245Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"6b85f157810fe4ab","timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:49:17.946991Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"6b85f157810fe4ab"}
	{"level":"info","ts":"2024-10-14T14:49:17.947052Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"6b85f157810fe4ab","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-10-14T14:49:17.949529Z","caller":"etcdserver/server.go:767","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-10-14T14:49:17.949676Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T14:49:17.949720Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T14:49:17.949733Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T14:49:17.950053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab switched to configuration voters=(7747864092090557611)"}
	{"level":"info","ts":"2024-10-14T14:49:17.950099Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","added-peer-id":"6b85f157810fe4ab","added-peer-peer-urls":["https://192.168.50.21:2380"]}
	{"level":"info","ts":"2024-10-14T14:49:17.952224Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:49:17.952271Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:49:17.953537Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:49:17.999030Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-14T14:49:18.019253Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b85f157810fe4ab","initial-advertise-peer-urls":["https://192.168.50.21:2380"],"listen-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T14:49:18.043525Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T14:49:18.000014Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2024-10-14T14:49:18.043606Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.21:2380"}
	
	
	==> etcd [fa1293d129c59ed9114df06371d786c1d05aeeb355384914cae4d38b6eb5c89f] <==
	{"level":"info","ts":"2024-10-14T14:49:39.019596Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab switched to configuration voters=(7747864092090557611)"}
	{"level":"info","ts":"2024-10-14T14:49:39.019751Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","added-peer-id":"6b85f157810fe4ab","added-peer-peer-urls":["https://192.168.50.21:2380"]}
	{"level":"info","ts":"2024-10-14T14:49:39.019960Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f04757488c993a3","local-member-id":"6b85f157810fe4ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:49:39.021577Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T14:49:39.020089Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-14T14:49:39.028080Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"6b85f157810fe4ab","initial-advertise-peer-urls":["https://192.168.50.21:2380"],"listen-peer-urls":["https://192.168.50.21:2380"],"advertise-client-urls":["https://192.168.50.21:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.21:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T14:49:39.028149Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T14:49:39.020114Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2024-10-14T14:49:39.028225Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.21:2380"}
	{"level":"info","ts":"2024-10-14T14:49:40.465862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab is starting a new election at term 2"}
	{"level":"info","ts":"2024-10-14T14:49:40.465985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became pre-candidate at term 2"}
	{"level":"info","ts":"2024-10-14T14:49:40.466042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab received MsgPreVoteResp from 6b85f157810fe4ab at term 2"}
	{"level":"info","ts":"2024-10-14T14:49:40.466098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became candidate at term 3"}
	{"level":"info","ts":"2024-10-14T14:49:40.466123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab received MsgVoteResp from 6b85f157810fe4ab at term 3"}
	{"level":"info","ts":"2024-10-14T14:49:40.466150Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b85f157810fe4ab became leader at term 3"}
	{"level":"info","ts":"2024-10-14T14:49:40.466176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b85f157810fe4ab elected leader 6b85f157810fe4ab at term 3"}
	{"level":"info","ts":"2024-10-14T14:49:40.468982Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"6b85f157810fe4ab","local-member-attributes":"{Name:kubernetes-upgrade-058309 ClientURLs:[https://192.168.50.21:2379]}","request-path":"/0/members/6b85f157810fe4ab/attributes","cluster-id":"6f04757488c993a3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T14:49:40.469213Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:49:40.469702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T14:49:40.470523Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:49:40.471272Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.21:2379"}
	{"level":"info","ts":"2024-10-14T14:49:40.471628Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T14:49:40.471679Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T14:49:40.472101Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T14:49:40.472912Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:49:46 up 1 min,  0 users,  load average: 1.38, 0.39, 0.13
	Linux kubernetes-upgrade-058309 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e] <==
	I1014 14:49:17.850452       1 options.go:228] external host was not specified, using 192.168.50.21
	I1014 14:49:17.871554       1 server.go:142] Version: v1.31.1
	I1014 14:49:17.871642       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [c0a354ffc708c2c65ec3c1aa1236b3bf52594a3f60c0d857fadc65c065b5b439] <==
	I1014 14:49:41.953919       1 aggregator.go:171] initial CRD sync complete...
	I1014 14:49:41.953950       1 autoregister_controller.go:144] Starting autoregister controller
	I1014 14:49:41.953986       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1014 14:49:41.954009       1 cache.go:39] Caches are synced for autoregister controller
	I1014 14:49:41.981384       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1014 14:49:41.993486       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1014 14:49:41.993510       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1014 14:49:41.993780       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1014 14:49:41.996298       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1014 14:49:41.996427       1 policy_source.go:224] refreshing policies
	I1014 14:49:41.999455       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1014 14:49:41.999503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1014 14:49:41.999706       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1014 14:49:42.000095       1 shared_informer.go:320] Caches are synced for configmaps
	E1014 14:49:42.000268       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1014 14:49:42.005094       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1014 14:49:42.012872       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:49:42.804894       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1014 14:49:43.439068       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1014 14:49:43.452265       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1014 14:49:43.511595       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1014 14:49:43.551489       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:49:43.562672       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1014 14:49:45.431735       1 controller.go:615] quota admission added evaluator for: endpoints
	I1014 14:49:45.632767       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6] <==
	
	
	==> kube-controller-manager [e473e65e4ebf1fc3053b341c7a9a6faefeb92b1c63bd50efceca48cc13251dd4] <==
	I1014 14:49:45.368768       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 14:49:45.379085       1 shared_informer.go:320] Caches are synced for stateful set
	I1014 14:49:45.390527       1 shared_informer.go:320] Caches are synced for resource quota
	I1014 14:49:45.392216       1 actual_state_of_world.go:540] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"kubernetes-upgrade-058309\" does not exist"
	I1014 14:49:45.399970       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1014 14:49:45.400034       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="kubernetes-upgrade-058309"
	I1014 14:49:45.400565       1 shared_informer.go:320] Caches are synced for attach detach
	I1014 14:49:45.403201       1 shared_informer.go:320] Caches are synced for GC
	I1014 14:49:45.405636       1 shared_informer.go:320] Caches are synced for daemon sets
	I1014 14:49:45.422052       1 shared_informer.go:320] Caches are synced for persistent volume
	I1014 14:49:45.478170       1 shared_informer.go:320] Caches are synced for taint
	I1014 14:49:45.478312       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1014 14:49:45.478439       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="kubernetes-upgrade-058309"
	I1014 14:49:45.478493       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1014 14:49:45.478633       1 shared_informer.go:320] Caches are synced for TTL
	I1014 14:49:45.478678       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1014 14:49:45.490819       1 shared_informer.go:320] Caches are synced for node
	I1014 14:49:45.490910       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1014 14:49:45.490945       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1014 14:49:45.490951       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1014 14:49:45.490957       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1014 14:49:45.491021       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-058309"
	I1014 14:49:45.887743       1 shared_informer.go:320] Caches are synced for garbage collector
	I1014 14:49:45.887771       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1014 14:49:45.895231       1 shared_informer.go:320] Caches are synced for garbage collector
	
	
	==> kube-proxy [89134f3282d1656982e8619a980fa156348484d927a578d8070d048cdc457407] <==
	E1014 14:49:35.418251       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 14:49:35.420628       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-058309\": dial tcp 192.168.50.21:8443: connect: connection refused"
	E1014 14:49:36.535173       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-058309\": dial tcp 192.168.50.21:8443: connect: connection refused"
	E1014 14:49:38.608086       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/kubernetes-upgrade-058309\": dial tcp 192.168.50.21:8443: connect: connection refused"
	I1014 14:49:43.158776       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.21"]
	E1014 14:49:43.158900       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 14:49:43.198845       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 14:49:43.198906       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 14:49:43.198945       1 server_linux.go:169] "Using iptables Proxier"
	I1014 14:49:43.202112       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 14:49:43.202786       1 server.go:483] "Version info" version="v1.31.1"
	I1014 14:49:43.202824       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:49:43.206006       1 config.go:199] "Starting service config controller"
	I1014 14:49:43.206076       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 14:49:43.206103       1 config.go:105] "Starting endpoint slice config controller"
	I1014 14:49:43.206107       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 14:49:43.206708       1 config.go:328] "Starting node config controller"
	I1014 14:49:43.206753       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 14:49:43.306514       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 14:49:43.306546       1 shared_informer.go:320] Caches are synced for service config
	I1014 14:49:43.306821       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [d5c832f52fe15e014ec29af0e3a8bc968e6e4dcf1c81f16101fee0bfdaa98ec9] <==
	
	
	==> kube-scheduler [71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2] <==
	
	
	==> kube-scheduler [dbd5900fff6913d36e6abad9b4902bb4d5413fc09d63dd3179ca3206da8f12e8] <==
	I1014 14:49:39.611493       1 serving.go:386] Generated self-signed cert in-memory
	W1014 14:49:41.870896       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 14:49:41.872292       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 14:49:41.873431       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 14:49:41.873557       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 14:49:41.917880       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 14:49:41.918094       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 14:49:41.922867       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 14:49:41.922945       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 14:49:41.923057       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 14:49:41.923211       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 14:49:42.023203       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: E1014 14:49:38.532868    3996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.21:8443: connect: connection refused" node="kubernetes-upgrade-058309"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:38.588991    3996 scope.go:117] "RemoveContainer" containerID="71e048c416be0baa18fe4e008b5529914d01f0e7f2a2ecbd4a55ed8905bb1ab2"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:38.589299    3996 scope.go:117] "RemoveContainer" containerID="9929f871b60ebe0d0932b286f86502ec9d925f728aa785166ffa93c3bb63fc2c"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:38.591069    3996 scope.go:117] "RemoveContainer" containerID="50a9a22a0b9d4e486cb7b05d7c68ab4396b7ee2510b01d194b01824b8379572e"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:38.592540    3996 scope.go:117] "RemoveContainer" containerID="23448dc575bae199f2ab87412c9b266c124ca3e69a232778dbf7c2ca241ef5e6"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: E1014 14:49:38.727048    3996 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/kubernetes-upgrade-058309?timeout=10s\": dial tcp 192.168.50.21:8443: connect: connection refused" interval="800ms"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: W1014 14:49:38.921646    3996 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.50.21:8443: connect: connection refused
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: E1014 14:49:38.921713    3996 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.50.21:8443: connect: connection refused" logger="UnhandledError"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:38.934640    3996 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-058309"
	Oct 14 14:49:38 kubernetes-upgrade-058309 kubelet[3996]: E1014 14:49:38.935544    3996 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.21:8443: connect: connection refused" node="kubernetes-upgrade-058309"
	Oct 14 14:49:39 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:39.737754    3996 kubelet_node_status.go:72] "Attempting to register node" node="kubernetes-upgrade-058309"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.072881    3996 kubelet_node_status.go:111] "Node was previously registered" node="kubernetes-upgrade-058309"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.072986    3996 kubelet_node_status.go:75] "Successfully registered node" node="kubernetes-upgrade-058309"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.073012    3996 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.074274    3996 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.098379    3996 apiserver.go:52] "Watching apiserver"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.117713    3996 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.207225    3996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30703798-788f-42c0-8206-69b63f835f5e-xtables-lock\") pod \"kube-proxy-klr59\" (UID: \"30703798-788f-42c0-8206-69b63f835f5e\") " pod="kube-system/kube-proxy-klr59"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.207512    3996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c1f8d0b8-cf29-453f-813e-490acd6a3801-tmp\") pod \"storage-provisioner\" (UID: \"c1f8d0b8-cf29-453f-813e-490acd6a3801\") " pod="kube-system/storage-provisioner"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.209021    3996 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30703798-788f-42c0-8206-69b63f835f5e-lib-modules\") pod \"kube-proxy-klr59\" (UID: \"30703798-788f-42c0-8206-69b63f835f5e\") " pod="kube-system/kube-proxy-klr59"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.401950    3996 scope.go:117] "RemoveContainer" containerID="b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.414202    3996 scope.go:117] "RemoveContainer" containerID="d7b6b5c13e23378728d84e7abf3db40ff888ef1a8421a5ca4484b540657417b2"
	Oct 14 14:49:42 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:42.414898    3996 scope.go:117] "RemoveContainer" containerID="88ab529b63edc11cfeecfa3f559f0be763330fb34abfdf0691523f2240ad201a"
	Oct 14 14:49:44 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:44.740508    3996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 14 14:49:45 kubernetes-upgrade-058309 kubelet[3996]: I1014 14:49:45.253724    3996 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [448321172f8ff1f590f6bd2b85d9f5623539255dca715ee2ea4c13a32b5aff56] <==
	I1014 14:49:42.622159       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:49:42.649718       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:49:42.651503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [b79910e970381ebcf1fa6fea1502c0689d8653ba6d32b950aac7209a7f098bd3] <==
	I1014 14:49:17.594501       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 14:49:17.625139       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-058309 -n kubernetes-upgrade-058309
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-058309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-058309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-058309
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-058309: (1.172723416s)
--- FAIL: TestKubernetesUpgrade (401.17s)
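For reference, the post-mortem evidence above can be collected by hand against a profile that has not yet been deleted. This is a minimal sketch: the status, kubectl, and delete invocations are copied from the harness output above, and `minikube logs -p` (a standard minikube subcommand) is assumed for reproducing the in-guest "==> ... <==" sections:

	# apiserver status as reported by minikube for the node
	out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-058309 -n kubernetes-upgrade-058309
	# any pods not in the Running phase, across all namespaces
	kubectl --context kubernetes-upgrade-058309 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# full in-guest logs, then cleanup
	out/minikube-linux-amd64 logs -p kubernetes-upgrade-058309
	out/minikube-linux-amd64 delete -p kubernetes-upgrade-058309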

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (301.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-399767 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-399767 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (5m0.784212233s)

                                                
                                                
-- stdout --
	* [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
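The stdout above shows "Generating certificates and keys" and "Booting up control plane" twice, i.e. kubeadm was retried and the v1.20.0 control plane never became healthy before the start timed out. A minimal sketch of how one might inspect the guest while such a run is still alive (assumptions: the VM still exists, and crictl/journalctl are available inside the minikube guest image):

	# list all containers the CRI has created, including exited ones
	out/minikube-linux-amd64 ssh -p old-k8s-version-399767 -- sudo crictl ps -a
	# recent kubelet output from inside the guest
	out/minikube-linux-amd64 ssh -p old-k8s-version-399767 -- sudo journalctl -u kubelet --no-pager -n 100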
** stderr ** 
	I1014 14:51:50.271625   64875 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:51:50.271732   64875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:51:50.271741   64875 out.go:358] Setting ErrFile to fd 2...
	I1014 14:51:50.271745   64875 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:51:50.271905   64875 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:51:50.272465   64875 out.go:352] Setting JSON to false
	I1014 14:51:50.273527   64875 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5660,"bootTime":1728911850,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:51:50.273617   64875 start.go:139] virtualization: kvm guest
	I1014 14:51:50.275710   64875 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:51:50.276941   64875 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:51:50.276985   64875 notify.go:220] Checking for updates...
	I1014 14:51:50.279028   64875 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:51:50.280178   64875 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:51:50.281270   64875 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:51:50.282529   64875 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:51:50.283725   64875 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:51:50.285180   64875 config.go:182] Loaded profile config "bridge-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:51:50.285276   64875 config.go:182] Loaded profile config "enable-default-cni-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:51:50.285353   64875 config.go:182] Loaded profile config "flannel-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:51:50.285443   64875 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:51:50.323658   64875 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 14:51:50.324987   64875 start.go:297] selected driver: kvm2
	I1014 14:51:50.325009   64875 start.go:901] validating driver "kvm2" against <nil>
	I1014 14:51:50.325025   64875 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:51:50.325752   64875 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:51:50.325872   64875 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:51:50.344491   64875 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:51:50.344552   64875 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 14:51:50.344829   64875 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:51:50.344865   64875 cni.go:84] Creating CNI manager for ""
	I1014 14:51:50.344918   64875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:51:50.344932   64875 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 14:51:50.345015   64875 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:51:50.345151   64875 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:51:50.347498   64875 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:51:50.348934   64875 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:51:50.348977   64875 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:51:50.348987   64875 cache.go:56] Caching tarball of preloaded images
	I1014 14:51:50.349096   64875 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:51:50.349111   64875 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:51:50.349240   64875 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:51:50.349267   64875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json: {Name:mk6dec77b979ac86206c5485c77783d677476eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:51:50.349441   64875 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:52:17.911929   64875 start.go:364] duration metric: took 27.562425838s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 14:52:17.912024   64875 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 14:52:17.912170   64875 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 14:52:17.914625   64875 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 14:52:17.914832   64875 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:52:17.914884   64875 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:52:17.931217   64875 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I1014 14:52:17.931638   64875 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:52:17.932174   64875 main.go:141] libmachine: Using API Version  1
	I1014 14:52:17.932208   64875 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:52:17.932502   64875 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:52:17.932703   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 14:52:17.932876   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:17.933028   64875 start.go:159] libmachine.API.Create for "old-k8s-version-399767" (driver="kvm2")
	I1014 14:52:17.933065   64875 client.go:168] LocalClient.Create starting
	I1014 14:52:17.933101   64875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 14:52:17.933135   64875 main.go:141] libmachine: Decoding PEM data...
	I1014 14:52:17.933165   64875 main.go:141] libmachine: Parsing certificate...
	I1014 14:52:17.933233   64875 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 14:52:17.933259   64875 main.go:141] libmachine: Decoding PEM data...
	I1014 14:52:17.933273   64875 main.go:141] libmachine: Parsing certificate...
	I1014 14:52:17.933298   64875 main.go:141] libmachine: Running pre-create checks...
	I1014 14:52:17.933311   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .PreCreateCheck
	I1014 14:52:17.933703   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 14:52:17.934100   64875 main.go:141] libmachine: Creating machine...
	I1014 14:52:17.934112   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .Create
	I1014 14:52:17.934261   64875 main.go:141] libmachine: (old-k8s-version-399767) Creating KVM machine...
	I1014 14:52:17.935447   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found existing default KVM network
	I1014 14:52:17.936940   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:17.936769   65339 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:95:45:84} reservation:<nil>}
	I1014 14:52:17.938165   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:17.938079   65339 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:8b:8f:90} reservation:<nil>}
	I1014 14:52:17.939137   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:17.939043   65339 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:e3:c8:a9} reservation:<nil>}
	I1014 14:52:17.940373   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:17.940283   65339 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000285b70}
	I1014 14:52:17.940403   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | created network xml: 
	I1014 14:52:17.940416   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | <network>
	I1014 14:52:17.940424   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |   <name>mk-old-k8s-version-399767</name>
	I1014 14:52:17.940442   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |   <dns enable='no'/>
	I1014 14:52:17.940451   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |   
	I1014 14:52:17.940461   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1014 14:52:17.940475   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |     <dhcp>
	I1014 14:52:17.940532   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1014 14:52:17.940552   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |     </dhcp>
	I1014 14:52:17.940578   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |   </ip>
	I1014 14:52:17.940609   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG |   
	I1014 14:52:17.940622   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | </network>
	I1014 14:52:17.940632   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | 
	I1014 14:52:17.946012   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | trying to create private KVM network mk-old-k8s-version-399767 192.168.72.0/24...
	I1014 14:52:18.019947   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | private KVM network mk-old-k8s-version-399767 192.168.72.0/24 created
	I1014 14:52:18.019980   64875 main.go:141] libmachine: (old-k8s-version-399767) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767 ...
	I1014 14:52:18.019992   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:18.019916   65339 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:52:18.020007   64875 main.go:141] libmachine: (old-k8s-version-399767) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 14:52:18.020235   64875 main.go:141] libmachine: (old-k8s-version-399767) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 14:52:18.268091   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:18.267980   65339 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa...
	I1014 14:52:18.434798   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:18.434704   65339 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/old-k8s-version-399767.rawdisk...
	I1014 14:52:18.434919   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Writing magic tar header
	I1014 14:52:18.434959   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Writing SSH key tar header
	I1014 14:52:18.434986   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:18.434915   65339 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767 ...
	I1014 14:52:18.435100   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767
	I1014 14:52:18.435120   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 14:52:18.435135   64875 main.go:141] libmachine: (old-k8s-version-399767) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767 (perms=drwx------)
	I1014 14:52:18.435169   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:52:18.435203   64875 main.go:141] libmachine: (old-k8s-version-399767) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 14:52:18.435229   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 14:52:18.435243   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 14:52:18.435254   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Checking permissions on dir: /home/jenkins
	I1014 14:52:18.435263   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Checking permissions on dir: /home
	I1014 14:52:18.435272   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Skipping /home - not owner
	I1014 14:52:18.435306   64875 main.go:141] libmachine: (old-k8s-version-399767) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 14:52:18.435331   64875 main.go:141] libmachine: (old-k8s-version-399767) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 14:52:18.435348   64875 main.go:141] libmachine: (old-k8s-version-399767) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 14:52:18.435360   64875 main.go:141] libmachine: (old-k8s-version-399767) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 14:52:18.435375   64875 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 14:52:18.436429   64875 main.go:141] libmachine: (old-k8s-version-399767) define libvirt domain using xml: 
	I1014 14:52:18.436449   64875 main.go:141] libmachine: (old-k8s-version-399767) <domain type='kvm'>
	I1014 14:52:18.436458   64875 main.go:141] libmachine: (old-k8s-version-399767)   <name>old-k8s-version-399767</name>
	I1014 14:52:18.436465   64875 main.go:141] libmachine: (old-k8s-version-399767)   <memory unit='MiB'>2200</memory>
	I1014 14:52:18.436480   64875 main.go:141] libmachine: (old-k8s-version-399767)   <vcpu>2</vcpu>
	I1014 14:52:18.436491   64875 main.go:141] libmachine: (old-k8s-version-399767)   <features>
	I1014 14:52:18.436504   64875 main.go:141] libmachine: (old-k8s-version-399767)     <acpi/>
	I1014 14:52:18.436513   64875 main.go:141] libmachine: (old-k8s-version-399767)     <apic/>
	I1014 14:52:18.436523   64875 main.go:141] libmachine: (old-k8s-version-399767)     <pae/>
	I1014 14:52:18.436533   64875 main.go:141] libmachine: (old-k8s-version-399767)     
	I1014 14:52:18.436548   64875 main.go:141] libmachine: (old-k8s-version-399767)   </features>
	I1014 14:52:18.436561   64875 main.go:141] libmachine: (old-k8s-version-399767)   <cpu mode='host-passthrough'>
	I1014 14:52:18.436567   64875 main.go:141] libmachine: (old-k8s-version-399767)   
	I1014 14:52:18.436573   64875 main.go:141] libmachine: (old-k8s-version-399767)   </cpu>
	I1014 14:52:18.436579   64875 main.go:141] libmachine: (old-k8s-version-399767)   <os>
	I1014 14:52:18.436585   64875 main.go:141] libmachine: (old-k8s-version-399767)     <type>hvm</type>
	I1014 14:52:18.436590   64875 main.go:141] libmachine: (old-k8s-version-399767)     <boot dev='cdrom'/>
	I1014 14:52:18.436596   64875 main.go:141] libmachine: (old-k8s-version-399767)     <boot dev='hd'/>
	I1014 14:52:18.436602   64875 main.go:141] libmachine: (old-k8s-version-399767)     <bootmenu enable='no'/>
	I1014 14:52:18.436608   64875 main.go:141] libmachine: (old-k8s-version-399767)   </os>
	I1014 14:52:18.436617   64875 main.go:141] libmachine: (old-k8s-version-399767)   <devices>
	I1014 14:52:18.436625   64875 main.go:141] libmachine: (old-k8s-version-399767)     <disk type='file' device='cdrom'>
	I1014 14:52:18.436632   64875 main.go:141] libmachine: (old-k8s-version-399767)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/boot2docker.iso'/>
	I1014 14:52:18.436639   64875 main.go:141] libmachine: (old-k8s-version-399767)       <target dev='hdc' bus='scsi'/>
	I1014 14:52:18.436644   64875 main.go:141] libmachine: (old-k8s-version-399767)       <readonly/>
	I1014 14:52:18.436650   64875 main.go:141] libmachine: (old-k8s-version-399767)     </disk>
	I1014 14:52:18.436656   64875 main.go:141] libmachine: (old-k8s-version-399767)     <disk type='file' device='disk'>
	I1014 14:52:18.436663   64875 main.go:141] libmachine: (old-k8s-version-399767)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 14:52:18.436677   64875 main.go:141] libmachine: (old-k8s-version-399767)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/old-k8s-version-399767.rawdisk'/>
	I1014 14:52:18.436684   64875 main.go:141] libmachine: (old-k8s-version-399767)       <target dev='hda' bus='virtio'/>
	I1014 14:52:18.436696   64875 main.go:141] libmachine: (old-k8s-version-399767)     </disk>
	I1014 14:52:18.436709   64875 main.go:141] libmachine: (old-k8s-version-399767)     <interface type='network'>
	I1014 14:52:18.436724   64875 main.go:141] libmachine: (old-k8s-version-399767)       <source network='mk-old-k8s-version-399767'/>
	I1014 14:52:18.436735   64875 main.go:141] libmachine: (old-k8s-version-399767)       <model type='virtio'/>
	I1014 14:52:18.436747   64875 main.go:141] libmachine: (old-k8s-version-399767)     </interface>
	I1014 14:52:18.436757   64875 main.go:141] libmachine: (old-k8s-version-399767)     <interface type='network'>
	I1014 14:52:18.436768   64875 main.go:141] libmachine: (old-k8s-version-399767)       <source network='default'/>
	I1014 14:52:18.436783   64875 main.go:141] libmachine: (old-k8s-version-399767)       <model type='virtio'/>
	I1014 14:52:18.436804   64875 main.go:141] libmachine: (old-k8s-version-399767)     </interface>
	I1014 14:52:18.436815   64875 main.go:141] libmachine: (old-k8s-version-399767)     <serial type='pty'>
	I1014 14:52:18.436827   64875 main.go:141] libmachine: (old-k8s-version-399767)       <target port='0'/>
	I1014 14:52:18.436837   64875 main.go:141] libmachine: (old-k8s-version-399767)     </serial>
	I1014 14:52:18.436847   64875 main.go:141] libmachine: (old-k8s-version-399767)     <console type='pty'>
	I1014 14:52:18.436862   64875 main.go:141] libmachine: (old-k8s-version-399767)       <target type='serial' port='0'/>
	I1014 14:52:18.436880   64875 main.go:141] libmachine: (old-k8s-version-399767)     </console>
	I1014 14:52:18.436891   64875 main.go:141] libmachine: (old-k8s-version-399767)     <rng model='virtio'>
	I1014 14:52:18.436904   64875 main.go:141] libmachine: (old-k8s-version-399767)       <backend model='random'>/dev/random</backend>
	I1014 14:52:18.436915   64875 main.go:141] libmachine: (old-k8s-version-399767)     </rng>
	I1014 14:52:18.436925   64875 main.go:141] libmachine: (old-k8s-version-399767)     
	I1014 14:52:18.436938   64875 main.go:141] libmachine: (old-k8s-version-399767)     
	I1014 14:52:18.436950   64875 main.go:141] libmachine: (old-k8s-version-399767)   </devices>
	I1014 14:52:18.436960   64875 main.go:141] libmachine: (old-k8s-version-399767) </domain>
	I1014 14:52:18.436973   64875 main.go:141] libmachine: (old-k8s-version-399767) 
	I1014 14:52:18.445661   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:a1:d0:42 in network default
	I1014 14:52:18.446244   64875 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 14:52:18.446268   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:18.447119   64875 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 14:52:18.447503   64875 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 14:52:18.448056   64875 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 14:52:18.448836   64875 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 14:52:20.039844   64875 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 14:52:20.040913   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:20.041427   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:20.041483   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:20.041406   65339 retry.go:31] will retry after 247.473207ms: waiting for machine to come up
	I1014 14:52:20.290992   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:20.291674   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:20.291694   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:20.291602   65339 retry.go:31] will retry after 341.38804ms: waiting for machine to come up
	I1014 14:52:20.634001   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:20.634485   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:20.634505   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:20.634429   65339 retry.go:31] will retry after 335.07989ms: waiting for machine to come up
	I1014 14:52:20.970889   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:20.971501   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:20.971518   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:20.971429   65339 retry.go:31] will retry after 520.308199ms: waiting for machine to come up
	I1014 14:52:21.495890   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:21.496495   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:21.496519   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:21.496403   65339 retry.go:31] will retry after 688.714137ms: waiting for machine to come up
	I1014 14:52:22.195708   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:22.196242   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:22.196283   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:22.196197   65339 retry.go:31] will retry after 697.110532ms: waiting for machine to come up
	I1014 14:52:22.894932   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:22.895593   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:22.895612   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:22.895512   65339 retry.go:31] will retry after 1.03845662s: waiting for machine to come up
	I1014 14:52:23.935903   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:23.936593   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:23.936617   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:23.936505   65339 retry.go:31] will retry after 1.352338264s: waiting for machine to come up
	I1014 14:52:25.290075   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:25.290584   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:25.290628   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:25.290526   65339 retry.go:31] will retry after 1.766919996s: waiting for machine to come up
	I1014 14:52:27.059041   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:27.059634   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:27.059659   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:27.059576   65339 retry.go:31] will retry after 1.421938069s: waiting for machine to come up
	I1014 14:52:28.482555   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:28.483044   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:28.483071   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:28.483007   65339 retry.go:31] will retry after 1.984498418s: waiting for machine to come up
	I1014 14:52:30.869808   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:30.870752   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:30.870776   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:30.870699   65339 retry.go:31] will retry after 3.272382318s: waiting for machine to come up
	I1014 14:52:34.145157   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:34.145644   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:34.145674   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:34.145612   65339 retry.go:31] will retry after 2.791770406s: waiting for machine to come up
	I1014 14:52:36.939622   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:36.940419   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 14:52:36.940441   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 14:52:36.940379   65339 retry.go:31] will retry after 4.702198862s: waiting for machine to come up
	I1014 14:52:41.644212   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.644744   64875 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 14:52:41.644785   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.644795   64875 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 14:52:41.645170   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767
	I1014 14:52:41.729301   64875 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 14:52:41.729326   64875 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 14:52:41.729380   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 14:52:41.732137   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.732581   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:01:70}
	I1014 14:52:41.732612   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.732790   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 14:52:41.732829   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 14:52:41.732890   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 14:52:41.732917   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 14:52:41.732929   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 14:52:41.871843   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
	I1014 14:52:41.872180   64875 main.go:141] libmachine: (old-k8s-version-399767) KVM machine creation complete!
	I1014 14:52:41.872543   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 14:52:41.873249   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:41.873518   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:41.873701   64875 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 14:52:41.873719   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 14:52:41.875248   64875 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 14:52:41.875265   64875 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 14:52:41.875271   64875 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 14:52:41.875279   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:41.878385   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.878809   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:41.878838   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.878912   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:41.879110   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:41.879303   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:41.879464   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:41.879668   64875 main.go:141] libmachine: Using SSH client type: native
	I1014 14:52:41.879915   64875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 14:52:41.879934   64875 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 14:52:41.994821   64875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:52:41.994841   64875 main.go:141] libmachine: Detecting the provisioner...
	I1014 14:52:41.994849   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:41.998375   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.998826   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:41.998861   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:41.999013   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:41.999364   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:41.999554   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:41.999729   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:41.999884   64875 main.go:141] libmachine: Using SSH client type: native
	I1014 14:52:42.000099   64875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 14:52:42.000114   64875 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 14:52:42.119889   64875 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 14:52:42.119997   64875 main.go:141] libmachine: found compatible host: buildroot
	I1014 14:52:42.120011   64875 main.go:141] libmachine: Provisioning with buildroot...
	I1014 14:52:42.120024   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 14:52:42.120287   64875 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 14:52:42.120316   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 14:52:42.120531   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:42.123734   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.124095   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.124134   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.124340   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:42.124524   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.124796   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.124946   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:42.125161   64875 main.go:141] libmachine: Using SSH client type: native
	I1014 14:52:42.125377   64875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 14:52:42.125395   64875 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 14:52:42.264156   64875 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 14:52:42.264181   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:42.267120   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.267587   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.267636   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.267793   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:42.267973   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.268171   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.268383   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:42.268590   64875 main.go:141] libmachine: Using SSH client type: native
	I1014 14:52:42.268773   64875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 14:52:42.268800   64875 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:52:42.396240   64875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
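Note: the net effect of the hosts script above is a single self-referential entry in the guest's /etc/hosts, either by rewriting an existing 127.0.1.1 line or appending a new one, i.e. (illustrative of the resulting line, not additional log output):
	127.0.1.1 old-k8s-version-399767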
	I1014 14:52:42.396273   64875 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 14:52:42.396300   64875 buildroot.go:174] setting up certificates
	I1014 14:52:42.396313   64875 provision.go:84] configureAuth start
	I1014 14:52:42.396327   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 14:52:42.396613   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 14:52:42.399661   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.400047   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.400076   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.400224   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:42.403500   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.403940   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.403980   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.404088   64875 provision.go:143] copyHostCerts
	I1014 14:52:42.404148   64875 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 14:52:42.404168   64875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 14:52:42.404234   64875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 14:52:42.404393   64875 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 14:52:42.404404   64875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 14:52:42.404433   64875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 14:52:42.404503   64875 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 14:52:42.404513   64875 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 14:52:42.404539   64875 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 14:52:42.404603   64875 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
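For anyone checking the SANs baked into the server certificate generated above, the host-side copy can be inspected with a standard openssl invocation (illustrative command using the path from the log, not something the test itself runs):
	openssl x509 -in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'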
	I1014 14:52:42.445086   64875 provision.go:177] copyRemoteCerts
	I1014 14:52:42.445142   64875 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:52:42.445162   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:42.448352   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.448722   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.448755   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.448975   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:42.449191   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.449381   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:42.449550   64875 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 14:52:42.539164   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 14:52:42.563119   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 14:52:42.587903   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 14:52:42.612069   64875 provision.go:87] duration metric: took 215.741813ms to configureAuth
	I1014 14:52:42.612105   64875 buildroot.go:189] setting minikube options for container-runtime
	I1014 14:52:42.612281   64875 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:52:42.612348   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:42.615291   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.615633   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.615665   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.615856   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:42.616047   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.616202   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.616364   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:42.616517   64875 main.go:141] libmachine: Using SSH client type: native
	I1014 14:52:42.616683   64875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 14:52:42.616697   64875 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 14:52:42.853886   64875 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 14:52:42.853913   64875 main.go:141] libmachine: Checking connection to Docker...
	I1014 14:52:42.853921   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetURL
	I1014 14:52:42.855199   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using libvirt version 6000000
	I1014 14:52:42.857302   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.857680   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.857723   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.857865   64875 main.go:141] libmachine: Docker is up and running!
	I1014 14:52:42.857880   64875 main.go:141] libmachine: Reticulating splines...
	I1014 14:52:42.857887   64875 client.go:171] duration metric: took 24.924811143s to LocalClient.Create
	I1014 14:52:42.857916   64875 start.go:167] duration metric: took 24.92488985s to libmachine.API.Create "old-k8s-version-399767"
	I1014 14:52:42.857929   64875 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 14:52:42.857944   64875 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:52:42.857969   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:42.858206   64875 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:52:42.858232   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:42.860468   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.860777   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.860804   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.860951   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:42.861133   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.861424   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:42.861560   64875 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 14:52:42.950914   64875 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:52:42.955227   64875 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 14:52:42.955250   64875 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 14:52:42.955312   64875 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 14:52:42.955382   64875 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 14:52:42.955479   64875 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:52:42.964980   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:52:42.989758   64875 start.go:296] duration metric: took 131.811536ms for postStartSetup
	I1014 14:52:42.989818   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 14:52:42.990462   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 14:52:42.993077   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.993553   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.993578   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.993842   64875 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:52:42.994146   64875 start.go:128] duration metric: took 25.081964179s to createHost
	I1014 14:52:42.994172   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:42.996712   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.997100   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:42.997127   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:42.997301   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:42.997492   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.997646   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:42.997776   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:42.997919   64875 main.go:141] libmachine: Using SSH client type: native
	I1014 14:52:42.998131   64875 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 14:52:42.998146   64875 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 14:52:43.115604   64875 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728917563.099704446
	
	I1014 14:52:43.115630   64875 fix.go:216] guest clock: 1728917563.099704446
	I1014 14:52:43.115637   64875 fix.go:229] Guest: 2024-10-14 14:52:43.099704446 +0000 UTC Remote: 2024-10-14 14:52:42.994160415 +0000 UTC m=+52.759910942 (delta=105.544031ms)
	I1014 14:52:43.115678   64875 fix.go:200] guest clock delta is within tolerance: 105.544031ms
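The delta reported above is just guest timestamp minus host timestamp: 1728917563.099704446 - 1728917562.994160415 = 0.105544031 s = 105.544031 ms, which the log confirms is within minikube's clock-drift tolerance.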
	I1014 14:52:43.115686   64875 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 25.203701487s
	I1014 14:52:43.115717   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:43.115975   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 14:52:43.118895   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:43.119316   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:43.119374   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:43.119518   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:43.119984   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:43.120138   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:52:43.120219   64875 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:52:43.120263   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:43.120397   64875 ssh_runner.go:195] Run: cat /version.json
	I1014 14:52:43.120437   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 14:52:43.123100   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:43.123488   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:43.123510   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:43.123531   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:43.123644   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:43.123798   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:43.123983   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:43.124006   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:43.124018   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:43.124154   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 14:52:43.124228   64875 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 14:52:43.124322   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 14:52:43.124481   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 14:52:43.124624   64875 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 14:52:43.244232   64875 ssh_runner.go:195] Run: systemctl --version
	I1014 14:52:43.250734   64875 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 14:52:43.419594   64875 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 14:52:43.425770   64875 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 14:52:43.425850   64875 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:52:43.442228   64875 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 14:52:43.442260   64875 start.go:495] detecting cgroup driver to use...
	I1014 14:52:43.442328   64875 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 14:52:43.459338   64875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 14:52:43.474211   64875 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:52:43.474265   64875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:52:43.488521   64875 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:52:43.502748   64875 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:52:43.619394   64875 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:52:43.759960   64875 docker.go:233] disabling docker service ...
	I1014 14:52:43.760013   64875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:52:43.774449   64875 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:52:43.788163   64875 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:52:43.933760   64875 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:52:44.064990   64875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:52:44.079779   64875 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:52:44.099105   64875 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 14:52:44.099182   64875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:52:44.109759   64875 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 14:52:44.109825   64875 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:52:44.120697   64875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:52:44.131058   64875 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 14:52:44.141857   64875 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:52:44.152943   64875 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:52:44.162550   64875 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 14:52:44.162613   64875 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 14:52:44.175118   64875 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:52:44.192127   64875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:52:44.323190   64875 ssh_runner.go:195] Run: sudo systemctl restart crio
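Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with settings along these lines before the restart (section headers are assumed for illustration; only the keys touched in the log are shown):
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"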
	I1014 14:52:44.437337   64875 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 14:52:44.437415   64875 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 14:52:44.442216   64875 start.go:563] Will wait 60s for crictl version
	I1014 14:52:44.442264   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:44.445946   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:52:44.492208   64875 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 14:52:44.492287   64875 ssh_runner.go:195] Run: crio --version
	I1014 14:52:44.533153   64875 ssh_runner.go:195] Run: crio --version
	I1014 14:52:44.576275   64875 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 14:52:44.578591   64875 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 14:52:44.585760   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:44.586144   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 15:52:35 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 14:52:44.586172   64875 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 14:52:44.586659   64875 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 14:52:44.593033   64875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 14:52:44.613535   64875 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:52:44.613653   64875 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:52:44.613707   64875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:52:44.661152   64875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 14:52:44.661240   64875 ssh_runner.go:195] Run: which lz4
	I1014 14:52:44.666401   64875 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 14:52:44.672138   64875 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 14:52:44.672176   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 14:52:46.488012   64875 crio.go:462] duration metric: took 1.821659956s to copy over tarball
	I1014 14:52:46.488098   64875 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 14:52:49.576905   64875 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.088772753s)
	I1014 14:52:49.576935   64875 crio.go:469] duration metric: took 3.08888673s to extract the tarball
	I1014 14:52:49.576943   64875 ssh_runner.go:146] rm: /preloaded.tar.lz4
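For scale, the preload tarball is 473237281 bytes (about 451 MiB), so the 1.82 s copy above works out to roughly 250 MiB/s and the 3.09 s extraction to roughly 145 MiB/s of compressed input.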
	I1014 14:52:49.646067   64875 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:52:49.700501   64875 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 14:52:49.700557   64875 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 14:52:49.700622   64875 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:52:49.700643   64875 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:52:49.700657   64875 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 14:52:49.700668   64875 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 14:52:49.700693   64875 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 14:52:49.700763   64875 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:52:49.700629   64875 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:52:49.700903   64875 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:52:49.702542   64875 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:52:49.702568   64875 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:52:49.702542   64875 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:52:49.702618   64875 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:52:49.702625   64875 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 14:52:49.702634   64875 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 14:52:49.702585   64875 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 14:52:49.702776   64875 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:52:49.865462   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 14:52:49.871583   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 14:52:49.883932   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:52:49.890709   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:52:49.904476   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:52:49.916488   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 14:52:49.930471   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:52:49.936246   64875 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 14:52:49.936339   64875 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 14:52:49.936404   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:50.007968   64875 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 14:52:50.008023   64875 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 14:52:50.008068   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:50.015214   64875 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 14:52:50.015260   64875 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:52:50.015311   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:50.070363   64875 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 14:52:50.070390   64875 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 14:52:50.070421   64875 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:52:50.070421   64875 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 14:52:50.070428   64875 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:52:50.070442   64875 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 14:52:50.070468   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:50.070468   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:50.070469   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:50.076158   64875 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 14:52:50.076207   64875 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:52:50.076220   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 14:52:50.076246   64875 ssh_runner.go:195] Run: which crictl
	I1014 14:52:50.076249   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 14:52:50.076328   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:52:50.086975   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:52:50.087005   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 14:52:50.087091   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:52:50.209075   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:52:50.209104   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:52:50.227614   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 14:52:50.227709   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 14:52:50.242354   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:52:50.242399   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:52:50.242513   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 14:52:50.374055   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:52:50.374085   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 14:52:50.374127   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 14:52:50.408407   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 14:52:50.408443   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 14:52:50.425203   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 14:52:50.425304   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 14:52:50.545066   64875 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 14:52:50.548840   64875 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 14:52:50.548891   64875 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 14:52:50.607044   64875 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 14:52:50.607131   64875 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 14:52:50.610242   64875 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 14:52:50.610398   64875 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 14:52:50.625572   64875 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:52:50.634250   64875 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 14:52:50.783215   64875 cache_images.go:92] duration metric: took 1.082636401s to LoadCachedImages
	W1014 14:52:50.783314   64875 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1014 14:52:50.783384   64875 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 14:52:50.783513   64875 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
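The kubelet drop-in shown above is later written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (see the scp a few lines below); a convenient way to confirm what the guest actually loaded would be (illustrative command, not part of the test run):
	systemctl cat kubelet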
	I1014 14:52:50.783610   64875 ssh_runner.go:195] Run: crio config
	I1014 14:52:50.855603   64875 cni.go:84] Creating CNI manager for ""
	I1014 14:52:50.855630   64875 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:52:50.855643   64875 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:52:50.855667   64875 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 14:52:50.855825   64875 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 14:52:50.855892   64875 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 14:52:50.871002   64875 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:52:50.871115   64875 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 14:52:50.886136   64875 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 14:52:50.909796   64875 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:52:50.932062   64875 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 14:52:50.953528   64875 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 14:52:50.957827   64875 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
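	The one-liner above drops any existing control-plane.minikube.internal entry from /etc/hosts and re-appends it pointing at 192.168.72.138. A minimal stand-alone equivalent (sketch only; uses sed instead of the temp-file approach):
	  # remove the old mapping, then append the new one
	  sudo sed -i '/\tcontrol-plane\.minikube\.internal$/d' /etc/hosts
	  printf '192.168.72.138\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts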
	I1014 14:52:50.972565   64875 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:52:51.102246   64875 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:52:51.125235   64875 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 14:52:51.125259   64875 certs.go:194] generating shared ca certs ...
	I1014 14:52:51.125280   64875 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:52:51.125449   64875 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 14:52:51.125516   64875 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 14:52:51.125531   64875 certs.go:256] generating profile certs ...
	I1014 14:52:51.125607   64875 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 14:52:51.125630   64875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.crt with IP's: []
	I1014 14:52:51.403437   64875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.crt ...
	I1014 14:52:51.403466   64875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.crt: {Name:mk243d3339f01c7a3b269f6f9288f0853439b8e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:52:51.403657   64875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key ...
	I1014 14:52:51.403679   64875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key: {Name:mk905484181dce1dadc00cb34d6fd19817467db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:52:51.403813   64875 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 14:52:51.403837   64875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt.c5ef93ea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.138]
	I1014 14:52:51.684306   64875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt.c5ef93ea ...
	I1014 14:52:51.684335   64875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt.c5ef93ea: {Name:mk2fdb72fa11048a5e09187aac147f8077f7a611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:52:51.684514   64875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea ...
	I1014 14:52:51.684532   64875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea: {Name:mk52111c82fbde66c4e34cc30d4a31cfff066efd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:52:51.684631   64875 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt.c5ef93ea -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt
	I1014 14:52:51.684726   64875 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key
	I1014 14:52:51.684798   64875 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 14:52:51.684816   64875 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt with IP's: []
	I1014 14:52:51.899472   64875 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt ...
	I1014 14:52:51.899505   64875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt: {Name:mk011076fb712007b54ea68011d0a41340557eaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:52:51.899709   64875 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key ...
	I1014 14:52:51.899731   64875 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key: {Name:mk9b0f3fd3c3a39a1568aa657b540f098fe6cd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:52:51.899935   64875 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 14:52:51.899974   64875 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 14:52:51.899984   64875 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 14:52:51.900005   64875 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 14:52:51.900028   64875 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:52:51.900049   64875 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 14:52:51.900092   64875 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 14:52:51.900728   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:52:51.940008   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:52:51.967223   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:52:51.998922   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 14:52:52.045863   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 14:52:52.090733   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 14:52:52.128522   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:52:52.157938   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 14:52:52.188131   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 14:52:52.216933   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:52:52.245337   64875 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 14:52:52.275449   64875 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:52:52.296553   64875 ssh_runner.go:195] Run: openssl version
	I1014 14:52:52.304951   64875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 14:52:52.317442   64875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 14:52:52.323901   64875 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 14:52:52.323968   64875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 14:52:52.332398   64875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:52:52.347329   64875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:52:52.362915   64875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:52:52.369330   64875 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:52:52.369391   64875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:52:52.377360   64875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:52:52.393991   64875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 14:52:52.408874   64875 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 14:52:52.414826   64875 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 14:52:52.414900   64875 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 14:52:52.421067   64875 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
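	A minimal sketch of the hash-and-link pattern the three certificate runs above repeat for each CA file (the minikubeCA.pem name is taken from the log; the symlink is named after the certificate's OpenSSL subject hash):
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/${HASH}.0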
	I1014 14:52:52.432119   64875 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:52:52.437659   64875 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 14:52:52.437720   64875 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:52:52.437815   64875 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 14:52:52.437867   64875 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:52:52.483912   64875 cri.go:89] found id: ""
	I1014 14:52:52.483999   64875 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 14:52:52.497871   64875 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 14:52:52.513389   64875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 14:52:52.527106   64875 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 14:52:52.527126   64875 kubeadm.go:157] found existing configuration files:
	
	I1014 14:52:52.527177   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 14:52:52.537220   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 14:52:52.537276   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 14:52:52.549226   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 14:52:52.558650   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 14:52:52.558724   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 14:52:52.572267   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 14:52:52.585467   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 14:52:52.585536   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 14:52:52.598511   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 14:52:52.610794   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 14:52:52.610862   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 14:52:52.624074   64875 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 14:52:52.774210   64875 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 14:52:52.774344   64875 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 14:52:53.034519   64875 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 14:52:53.034715   64875 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 14:52:53.034867   64875 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 14:52:53.240443   64875 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 14:52:53.245738   64875 out.go:235]   - Generating certificates and keys ...
	I1014 14:52:53.245852   64875 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 14:52:53.245947   64875 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 14:52:53.480634   64875 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 14:52:53.589596   64875 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 14:52:53.791850   64875 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 14:52:53.968080   64875 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 14:52:54.273419   64875 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 14:52:54.273655   64875 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-399767] and IPs [192.168.72.138 127.0.0.1 ::1]
	I1014 14:52:54.378394   64875 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 14:52:54.378625   64875 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-399767] and IPs [192.168.72.138 127.0.0.1 ::1]
	I1014 14:52:54.513004   64875 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 14:52:54.714355   64875 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 14:52:55.033676   64875 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 14:52:55.033772   64875 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 14:52:55.345282   64875 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 14:52:55.538775   64875 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 14:52:55.668959   64875 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 14:52:55.802112   64875 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 14:52:55.829091   64875 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 14:52:55.829771   64875 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 14:52:55.829835   64875 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 14:52:56.001629   64875 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 14:52:56.003828   64875 out.go:235]   - Booting up control plane ...
	I1014 14:52:56.003958   64875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 14:52:56.017328   64875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 14:52:56.018461   64875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 14:52:56.019468   64875 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 14:52:56.024238   64875 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 14:53:36.026475   64875 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 14:53:36.026873   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:53:36.027144   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:53:41.028083   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:53:41.028355   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:53:51.033479   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:53:51.033801   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:54:11.032749   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:54:11.033055   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:54:51.032888   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:54:51.033149   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:54:51.033184   64875 kubeadm.go:310] 
	I1014 14:54:51.033239   64875 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 14:54:51.033290   64875 kubeadm.go:310] 		timed out waiting for the condition
	I1014 14:54:51.033303   64875 kubeadm.go:310] 
	I1014 14:54:51.033349   64875 kubeadm.go:310] 	This error is likely caused by:
	I1014 14:54:51.033406   64875 kubeadm.go:310] 		- The kubelet is not running
	I1014 14:54:51.033541   64875 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 14:54:51.033548   64875 kubeadm.go:310] 
	I1014 14:54:51.033684   64875 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 14:54:51.033811   64875 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 14:54:51.033871   64875 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 14:54:51.033882   64875 kubeadm.go:310] 
	I1014 14:54:51.034034   64875 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 14:54:51.034173   64875 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 14:54:51.034190   64875 kubeadm.go:310] 
	I1014 14:54:51.034323   64875 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 14:54:51.034448   64875 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 14:54:51.034566   64875 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 14:54:51.034670   64875 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 14:54:51.034704   64875 kubeadm.go:310] 
	I1014 14:54:51.034871   64875 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 14:54:51.035004   64875 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 14:54:51.035135   64875 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 14:54:51.035197   64875 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-399767] and IPs [192.168.72.138 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-399767] and IPs [192.168.72.138 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-399767] and IPs [192.168.72.138 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-399767] and IPs [192.168.72.138 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 14:54:51.035238   64875 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 14:54:53.401015   64875 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.365749103s)
	I1014 14:54:53.401108   64875 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:54:53.415579   64875 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 14:54:53.427309   64875 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 14:54:53.427334   64875 kubeadm.go:157] found existing configuration files:
	
	I1014 14:54:53.427384   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 14:54:53.437584   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 14:54:53.437640   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 14:54:53.448096   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 14:54:53.457849   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 14:54:53.457908   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 14:54:53.468619   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 14:54:53.478393   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 14:54:53.478469   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 14:54:53.488315   64875 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 14:54:53.497793   64875 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 14:54:53.497849   64875 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 14:54:53.509032   64875 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 14:54:53.749616   64875 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 14:56:50.359315   64875 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 14:56:50.359408   64875 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 14:56:50.361102   64875 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 14:56:50.361147   64875 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 14:56:50.361233   64875 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 14:56:50.361310   64875 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 14:56:50.361386   64875 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 14:56:50.361440   64875 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 14:56:50.363575   64875 out.go:235]   - Generating certificates and keys ...
	I1014 14:56:50.363652   64875 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 14:56:50.363725   64875 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 14:56:50.363811   64875 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 14:56:50.363871   64875 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 14:56:50.363950   64875 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 14:56:50.364022   64875 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 14:56:50.364106   64875 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 14:56:50.364209   64875 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 14:56:50.364293   64875 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 14:56:50.364365   64875 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 14:56:50.364403   64875 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 14:56:50.364449   64875 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 14:56:50.364539   64875 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 14:56:50.364641   64875 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 14:56:50.364740   64875 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 14:56:50.364818   64875 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 14:56:50.364919   64875 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 14:56:50.365001   64875 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 14:56:50.365065   64875 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 14:56:50.365163   64875 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 14:56:50.366880   64875 out.go:235]   - Booting up control plane ...
	I1014 14:56:50.366973   64875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 14:56:50.367040   64875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 14:56:50.367112   64875 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 14:56:50.367227   64875 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 14:56:50.367412   64875 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 14:56:50.367472   64875 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 14:56:50.367534   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:56:50.367703   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:56:50.367805   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:56:50.368001   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:56:50.368097   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:56:50.368282   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:56:50.368377   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:56:50.368546   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:56:50.368613   64875 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 14:56:50.368776   64875 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 14:56:50.368784   64875 kubeadm.go:310] 
	I1014 14:56:50.368838   64875 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 14:56:50.368975   64875 kubeadm.go:310] 		timed out waiting for the condition
	I1014 14:56:50.368995   64875 kubeadm.go:310] 
	I1014 14:56:50.369044   64875 kubeadm.go:310] 	This error is likely caused by:
	I1014 14:56:50.369094   64875 kubeadm.go:310] 		- The kubelet is not running
	I1014 14:56:50.369216   64875 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 14:56:50.369227   64875 kubeadm.go:310] 
	I1014 14:56:50.369367   64875 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 14:56:50.369423   64875 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 14:56:50.369464   64875 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 14:56:50.369473   64875 kubeadm.go:310] 
	I1014 14:56:50.369611   64875 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 14:56:50.369715   64875 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 14:56:50.369725   64875 kubeadm.go:310] 
	I1014 14:56:50.369884   64875 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 14:56:50.369999   64875 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 14:56:50.370102   64875 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 14:56:50.370203   64875 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 14:56:50.370273   64875 kubeadm.go:310] 
	I1014 14:56:50.370275   64875 kubeadm.go:394] duration metric: took 3m57.932560469s to StartCluster
	I1014 14:56:50.370368   64875 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:56:50.370431   64875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:56:50.416904   64875 cri.go:89] found id: ""
	I1014 14:56:50.416931   64875 logs.go:282] 0 containers: []
	W1014 14:56:50.416939   64875 logs.go:284] No container was found matching "kube-apiserver"
	I1014 14:56:50.416944   64875 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 14:56:50.417002   64875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:56:50.451870   64875 cri.go:89] found id: ""
	I1014 14:56:50.451893   64875 logs.go:282] 0 containers: []
	W1014 14:56:50.451900   64875 logs.go:284] No container was found matching "etcd"
	I1014 14:56:50.451905   64875 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 14:56:50.451965   64875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:56:50.485601   64875 cri.go:89] found id: ""
	I1014 14:56:50.485629   64875 logs.go:282] 0 containers: []
	W1014 14:56:50.485637   64875 logs.go:284] No container was found matching "coredns"
	I1014 14:56:50.485643   64875 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:56:50.485695   64875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:56:50.522038   64875 cri.go:89] found id: ""
	I1014 14:56:50.522063   64875 logs.go:282] 0 containers: []
	W1014 14:56:50.522071   64875 logs.go:284] No container was found matching "kube-scheduler"
	I1014 14:56:50.522077   64875 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:56:50.522123   64875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:56:50.559149   64875 cri.go:89] found id: ""
	I1014 14:56:50.559181   64875 logs.go:282] 0 containers: []
	W1014 14:56:50.559188   64875 logs.go:284] No container was found matching "kube-proxy"
	I1014 14:56:50.559194   64875 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:56:50.559262   64875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:56:50.598533   64875 cri.go:89] found id: ""
	I1014 14:56:50.598560   64875 logs.go:282] 0 containers: []
	W1014 14:56:50.598568   64875 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 14:56:50.598575   64875 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 14:56:50.598689   64875 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:56:50.633130   64875 cri.go:89] found id: ""
	I1014 14:56:50.633163   64875 logs.go:282] 0 containers: []
	W1014 14:56:50.633175   64875 logs.go:284] No container was found matching "kindnet"
	I1014 14:56:50.633187   64875 logs.go:123] Gathering logs for kubelet ...
	I1014 14:56:50.633202   64875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 14:56:50.683294   64875 logs.go:123] Gathering logs for dmesg ...
	I1014 14:56:50.683329   64875 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:56:50.696610   64875 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:56:50.696644   64875 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 14:56:50.842628   64875 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 14:56:50.842654   64875 logs.go:123] Gathering logs for CRI-O ...
	I1014 14:56:50.842672   64875 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 14:56:50.964279   64875 logs.go:123] Gathering logs for container status ...
	I1014 14:56:50.964318   64875 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1014 14:56:51.002487   64875 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 14:56:51.002539   64875 out.go:270] * 
	* 
	W1014 14:56:51.002591   64875 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 14:56:51.002629   64875 out.go:270] * 
	* 
	W1014 14:56:51.003514   64875 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:56:51.007011   64875 out.go:201] 
	W1014 14:56:51.008120   64875 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 14:56:51.008167   64875 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 14:56:51.008198   64875 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 14:56:51.009688   64875 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-399767 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 6 (229.054524ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:56:51.283486   71770 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-399767" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (301.07s)
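
Triage note: this first-start failure is the K8S_KUBELET_NOT_RUNNING path, and the log above already prints minikube's own suggestion (retry with the systemd cgroup driver) plus a warning that kubectl is pointing at a stale context. A minimal retry/inspection sketch based only on those suggestions follows; the profile name and flags are taken from this run's own command line, and running these by hand on the Jenkins host is an assumption, not part of the captured log:

	# Retry the failed first start with the cgroup driver minikube suggests (see the Suggestion line above)
	out/minikube-linux-amd64 start -p old-k8s-version-399767 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# If the kubelet still never answers on :10248, inspect it on the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-399767 -- sudo systemctl status kubelet
	out/minikube-linux-amd64 ssh -p old-k8s-version-399767 -- sudo journalctl -xeu kubelet --no-pager | tail -n 100

	# The post-mortem status also warns about a stale kubectl context
	out/minikube-linux-amd64 update-context -p old-k8s-version-399767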

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-813300 --alsologtostderr -v=3
E1014 14:54:27.794932   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:27.801409   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:27.812898   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:27.835046   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:27.876845   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:27.958310   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:28.119786   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:28.441053   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:29.082905   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:30.364549   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:32.926089   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:54:38.047520   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-813300 --alsologtostderr -v=3: exit status 82 (2m0.524728102s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-813300"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:54:03.962112   70646 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:54:03.962289   70646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:54:03.962301   70646 out.go:358] Setting ErrFile to fd 2...
	I1014 14:54:03.962308   70646 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:54:03.962500   70646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:54:03.962792   70646 out.go:352] Setting JSON to false
	I1014 14:54:03.962936   70646 mustload.go:65] Loading cluster: no-preload-813300
	I1014 14:54:03.963393   70646 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:54:03.963480   70646 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/config.json ...
	I1014 14:54:03.963669   70646 mustload.go:65] Loading cluster: no-preload-813300
	I1014 14:54:03.963806   70646 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:54:03.963833   70646 stop.go:39] StopHost: no-preload-813300
	I1014 14:54:03.964263   70646 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:54:03.964313   70646 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:54:03.979811   70646 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43633
	I1014 14:54:03.980294   70646 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:54:03.980884   70646 main.go:141] libmachine: Using API Version  1
	I1014 14:54:03.980909   70646 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:54:03.981234   70646 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:54:03.983587   70646 out.go:177] * Stopping node "no-preload-813300"  ...
	I1014 14:54:03.984787   70646 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1014 14:54:03.984812   70646 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 14:54:03.985027   70646 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1014 14:54:03.985053   70646 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 14:54:03.988208   70646 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 14:54:03.988677   70646 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 15:53:00 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 14:54:03.988712   70646 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 14:54:03.988883   70646 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 14:54:03.989046   70646 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 14:54:03.989213   70646 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 14:54:03.989401   70646 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 14:54:04.112728   70646 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1014 14:54:04.175021   70646 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1014 14:54:04.230796   70646 main.go:141] libmachine: Stopping "no-preload-813300"...
	I1014 14:54:04.230853   70646 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 14:54:04.232746   70646 main.go:141] libmachine: (no-preload-813300) Calling .Stop
	I1014 14:54:04.236778   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 0/120
	I1014 14:54:05.238355   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 1/120
	I1014 14:54:06.239722   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 2/120
	I1014 14:54:07.241190   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 3/120
	I1014 14:54:08.242508   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 4/120
	I1014 14:54:09.244471   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 5/120
	I1014 14:54:10.245775   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 6/120
	I1014 14:54:11.247255   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 7/120
	I1014 14:54:12.249269   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 8/120
	I1014 14:54:13.250941   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 9/120
	I1014 14:54:14.252479   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 10/120
	I1014 14:54:15.253993   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 11/120
	I1014 14:54:16.255804   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 12/120
	I1014 14:54:17.257249   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 13/120
	I1014 14:54:18.258565   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 14/120
	I1014 14:54:19.260344   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 15/120
	I1014 14:54:20.262067   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 16/120
	I1014 14:54:21.263709   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 17/120
	I1014 14:54:22.265257   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 18/120
	I1014 14:54:23.266670   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 19/120
	I1014 14:54:24.269018   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 20/120
	I1014 14:54:25.271486   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 21/120
	I1014 14:54:26.273412   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 22/120
	I1014 14:54:27.274984   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 23/120
	I1014 14:54:28.276419   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 24/120
	I1014 14:54:29.277907   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 25/120
	I1014 14:54:30.279112   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 26/120
	I1014 14:54:31.281579   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 27/120
	I1014 14:54:32.283107   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 28/120
	I1014 14:54:33.285136   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 29/120
	I1014 14:54:34.286275   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 30/120
	I1014 14:54:35.287688   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 31/120
	I1014 14:54:36.288915   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 32/120
	I1014 14:54:37.290455   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 33/120
	I1014 14:54:38.291854   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 34/120
	I1014 14:54:39.293645   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 35/120
	I1014 14:54:40.294968   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 36/120
	I1014 14:54:41.296829   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 37/120
	I1014 14:54:42.298280   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 38/120
	I1014 14:54:43.300273   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 39/120
	I1014 14:54:44.302498   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 40/120
	I1014 14:54:45.303811   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 41/120
	I1014 14:54:46.305455   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 42/120
	I1014 14:54:47.306842   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 43/120
	I1014 14:54:48.309278   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 44/120
	I1014 14:54:49.311428   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 45/120
	I1014 14:54:50.313643   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 46/120
	I1014 14:54:51.315029   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 47/120
	I1014 14:54:52.317107   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 48/120
	I1014 14:54:53.318646   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 49/120
	I1014 14:54:54.321088   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 50/120
	I1014 14:54:55.322814   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 51/120
	I1014 14:54:56.324984   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 52/120
	I1014 14:54:57.326451   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 53/120
	I1014 14:54:58.327851   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 54/120
	I1014 14:54:59.330059   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 55/120
	I1014 14:55:00.331629   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 56/120
	I1014 14:55:01.332890   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 57/120
	I1014 14:55:02.334363   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 58/120
	I1014 14:55:03.336094   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 59/120
	I1014 14:55:04.338324   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 60/120
	I1014 14:55:05.339761   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 61/120
	I1014 14:55:06.341268   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 62/120
	I1014 14:55:07.342584   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 63/120
	I1014 14:55:08.344644   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 64/120
	I1014 14:55:09.346648   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 65/120
	I1014 14:55:10.348295   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 66/120
	I1014 14:55:11.349710   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 67/120
	I1014 14:55:12.351242   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 68/120
	I1014 14:55:13.352613   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 69/120
	I1014 14:55:14.354807   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 70/120
	I1014 14:55:15.357384   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 71/120
	I1014 14:55:16.358853   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 72/120
	I1014 14:55:17.361182   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 73/120
	I1014 14:55:18.362699   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 74/120
	I1014 14:55:19.364524   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 75/120
	I1014 14:55:20.365887   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 76/120
	I1014 14:55:21.367374   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 77/120
	I1014 14:55:22.368655   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 78/120
	I1014 14:55:23.370015   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 79/120
	I1014 14:55:24.372023   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 80/120
	I1014 14:55:25.373371   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 81/120
	I1014 14:55:26.374870   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 82/120
	I1014 14:55:27.376084   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 83/120
	I1014 14:55:28.377531   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 84/120
	I1014 14:55:29.379426   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 85/120
	I1014 14:55:30.381058   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 86/120
	I1014 14:55:31.382518   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 87/120
	I1014 14:55:32.383896   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 88/120
	I1014 14:55:33.385269   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 89/120
	I1014 14:55:34.386668   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 90/120
	I1014 14:55:35.387953   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 91/120
	I1014 14:55:36.389129   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 92/120
	I1014 14:55:37.390629   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 93/120
	I1014 14:55:38.391832   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 94/120
	I1014 14:55:39.393716   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 95/120
	I1014 14:55:40.395138   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 96/120
	I1014 14:55:41.396315   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 97/120
	I1014 14:55:42.397861   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 98/120
	I1014 14:55:43.399141   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 99/120
	I1014 14:55:44.401339   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 100/120
	I1014 14:55:45.402678   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 101/120
	I1014 14:55:46.403843   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 102/120
	I1014 14:55:47.405257   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 103/120
	I1014 14:55:48.406458   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 104/120
	I1014 14:55:49.408413   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 105/120
	I1014 14:55:50.409816   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 106/120
	I1014 14:55:51.411036   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 107/120
	I1014 14:55:52.412458   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 108/120
	I1014 14:55:53.413734   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 109/120
	I1014 14:55:54.415824   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 110/120
	I1014 14:55:55.417184   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 111/120
	I1014 14:55:56.418527   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 112/120
	I1014 14:55:57.419877   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 113/120
	I1014 14:55:58.421320   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 114/120
	I1014 14:55:59.423396   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 115/120
	I1014 14:56:00.425159   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 116/120
	I1014 14:56:01.426891   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 117/120
	I1014 14:56:02.428366   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 118/120
	I1014 14:56:03.429783   70646 main.go:141] libmachine: (no-preload-813300) Waiting for machine to stop 119/120
	I1014 14:56:04.430986   70646 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1014 14:56:04.431063   70646 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1014 14:56:04.432991   70646 out.go:201] 
	W1014 14:56:04.434311   70646 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1014 14:56:04.434327   70646 out.go:270] * 
	* 
	W1014 14:56:04.437937   70646 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:56:04.439309   70646 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-813300 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300
E1014 14:56:06.400582   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.153019   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.159468   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.170846   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.192294   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.233695   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.314952   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.476624   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:07.798586   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:08.440805   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:09.722314   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:10.435793   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:12.284267   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:17.406449   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.163051   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.169395   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.180735   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.202091   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.243464   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.324926   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.486178   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:22.807787   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300: exit status 3 (18.654598068s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:56:23.094984   71452 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	E1014 14:56:23.095007   71452 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-813300" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.18s)
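The post-mortem above runs out/minikube-linux-amd64 status --format={{.Host}} against the profile and treats exit status 3 as tolerable ("may be ok"): the host prints "Error" instead of "Running", so the helper skips log retrieval rather than failing the post-mortem outright. Below is a minimal Go sketch of that exit-code handling, assuming a hypothetical hostState helper and the binary path shown in the log; it is not the repository's helpers_test.go.

// Minimal sketch, not helpers_test.go: run the status command and treat
// exit status 3 ("host not running") as non-fatal, mirroring the
// "status error: exit status 3 (may be ok)" line above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func hostState(profile string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 3 {
		return state, nil // e.g. "Error": host not running, skip log retrieval
	}
	return state, err
}

func main() {
	state, err := hostState("no-preload-813300") // profile name taken from the log above
	fmt.Println(state, err)
}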

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (138.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-989166 --alsologtostderr -v=3
E1014 14:55:08.771129   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-989166 --alsologtostderr -v=3: exit status 82 (2m0.524742249s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-989166"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:54:57.922715   71050 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:54:57.922853   71050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:54:57.922864   71050 out.go:358] Setting ErrFile to fd 2...
	I1014 14:54:57.922870   71050 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:54:57.923041   71050 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:54:57.923278   71050 out.go:352] Setting JSON to false
	I1014 14:54:57.923354   71050 mustload.go:65] Loading cluster: embed-certs-989166
	I1014 14:54:57.923691   71050 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:54:57.923760   71050 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/config.json ...
	I1014 14:54:57.923936   71050 mustload.go:65] Loading cluster: embed-certs-989166
	I1014 14:54:57.924032   71050 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:54:57.924054   71050 stop.go:39] StopHost: embed-certs-989166
	I1014 14:54:57.924425   71050 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:54:57.924474   71050 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:54:57.939338   71050 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38549
	I1014 14:54:57.939787   71050 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:54:57.940333   71050 main.go:141] libmachine: Using API Version  1
	I1014 14:54:57.940357   71050 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:54:57.940900   71050 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:54:57.943423   71050 out.go:177] * Stopping node "embed-certs-989166"  ...
	I1014 14:54:57.944997   71050 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1014 14:54:57.945025   71050 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 14:54:57.945282   71050 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1014 14:54:57.945321   71050 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 14:54:57.948177   71050 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 14:54:57.948662   71050 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 15:53:29 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 14:54:57.948714   71050 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 14:54:57.948826   71050 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 14:54:57.948997   71050 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 14:54:57.949117   71050 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 14:54:57.949262   71050 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 14:54:58.072295   71050 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1014 14:54:58.135476   71050 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1014 14:54:58.196827   71050 main.go:141] libmachine: Stopping "embed-certs-989166"...
	I1014 14:54:58.196857   71050 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 14:54:58.198390   71050 main.go:141] libmachine: (embed-certs-989166) Calling .Stop
	I1014 14:54:58.201745   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 0/120
	I1014 14:54:59.203460   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 1/120
	I1014 14:55:00.204716   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 2/120
	I1014 14:55:01.206282   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 3/120
	I1014 14:55:02.207670   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 4/120
	I1014 14:55:03.209797   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 5/120
	I1014 14:55:04.211272   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 6/120
	I1014 14:55:05.212552   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 7/120
	I1014 14:55:06.213920   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 8/120
	I1014 14:55:07.215361   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 9/120
	I1014 14:55:08.216924   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 10/120
	I1014 14:55:09.218293   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 11/120
	I1014 14:55:10.219820   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 12/120
	I1014 14:55:11.221183   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 13/120
	I1014 14:55:12.222685   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 14/120
	I1014 14:55:13.224707   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 15/120
	I1014 14:55:14.226117   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 16/120
	I1014 14:55:15.228496   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 17/120
	I1014 14:55:16.229999   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 18/120
	I1014 14:55:17.231654   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 19/120
	I1014 14:55:18.234226   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 20/120
	I1014 14:55:19.235966   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 21/120
	I1014 14:55:20.237545   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 22/120
	I1014 14:55:21.239175   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 23/120
	I1014 14:55:22.240528   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 24/120
	I1014 14:55:23.242643   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 25/120
	I1014 14:55:24.243540   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 26/120
	I1014 14:55:25.245668   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 27/120
	I1014 14:55:26.246962   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 28/120
	I1014 14:55:27.248426   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 29/120
	I1014 14:55:28.250782   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 30/120
	I1014 14:55:29.252052   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 31/120
	I1014 14:55:30.253551   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 32/120
	I1014 14:55:31.255064   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 33/120
	I1014 14:55:32.256422   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 34/120
	I1014 14:55:33.257891   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 35/120
	I1014 14:55:34.259287   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 36/120
	I1014 14:55:35.261030   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 37/120
	I1014 14:55:36.262472   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 38/120
	I1014 14:55:37.263997   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 39/120
	I1014 14:55:38.266157   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 40/120
	I1014 14:55:39.267548   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 41/120
	I1014 14:55:40.268867   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 42/120
	I1014 14:55:41.270300   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 43/120
	I1014 14:55:42.271685   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 44/120
	I1014 14:55:43.273193   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 45/120
	I1014 14:55:44.274563   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 46/120
	I1014 14:55:45.276061   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 47/120
	I1014 14:55:46.277515   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 48/120
	I1014 14:55:47.279008   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 49/120
	I1014 14:55:48.281353   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 50/120
	I1014 14:55:49.282762   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 51/120
	I1014 14:55:50.284153   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 52/120
	I1014 14:55:51.285738   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 53/120
	I1014 14:55:52.287019   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 54/120
	I1014 14:55:53.289148   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 55/120
	I1014 14:55:54.290324   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 56/120
	I1014 14:55:55.291760   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 57/120
	I1014 14:55:56.293466   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 58/120
	I1014 14:55:57.294863   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 59/120
	I1014 14:55:58.297129   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 60/120
	I1014 14:55:59.298589   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 61/120
	I1014 14:56:00.300061   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 62/120
	I1014 14:56:01.301482   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 63/120
	I1014 14:56:02.302867   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 64/120
	I1014 14:56:03.305043   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 65/120
	I1014 14:56:04.306711   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 66/120
	I1014 14:56:05.307929   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 67/120
	I1014 14:56:06.309279   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 68/120
	I1014 14:56:07.310749   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 69/120
	I1014 14:56:08.312943   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 70/120
	I1014 14:56:09.314402   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 71/120
	I1014 14:56:10.315725   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 72/120
	I1014 14:56:11.317098   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 73/120
	I1014 14:56:12.318472   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 74/120
	I1014 14:56:13.320519   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 75/120
	I1014 14:56:14.321950   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 76/120
	I1014 14:56:15.323361   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 77/120
	I1014 14:56:16.325254   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 78/120
	I1014 14:56:17.326667   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 79/120
	I1014 14:56:18.328751   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 80/120
	I1014 14:56:19.330060   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 81/120
	I1014 14:56:20.331329   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 82/120
	I1014 14:56:21.332693   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 83/120
	I1014 14:56:22.333846   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 84/120
	I1014 14:56:23.335758   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 85/120
	I1014 14:56:24.337054   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 86/120
	I1014 14:56:25.338369   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 87/120
	I1014 14:56:26.339305   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 88/120
	I1014 14:56:27.340653   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 89/120
	I1014 14:56:28.342865   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 90/120
	I1014 14:56:29.344920   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 91/120
	I1014 14:56:30.346282   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 92/120
	I1014 14:56:31.347682   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 93/120
	I1014 14:56:32.349196   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 94/120
	I1014 14:56:33.351512   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 95/120
	I1014 14:56:34.353137   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 96/120
	I1014 14:56:35.354863   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 97/120
	I1014 14:56:36.356617   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 98/120
	I1014 14:56:37.358086   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 99/120
	I1014 14:56:38.360326   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 100/120
	I1014 14:56:39.361989   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 101/120
	I1014 14:56:40.363828   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 102/120
	I1014 14:56:41.365486   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 103/120
	I1014 14:56:42.367006   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 104/120
	I1014 14:56:43.369165   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 105/120
	I1014 14:56:44.371025   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 106/120
	I1014 14:56:45.372601   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 107/120
	I1014 14:56:46.374003   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 108/120
	I1014 14:56:47.375812   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 109/120
	I1014 14:56:48.378307   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 110/120
	I1014 14:56:49.379742   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 111/120
	I1014 14:56:50.381215   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 112/120
	I1014 14:56:51.383433   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 113/120
	I1014 14:56:52.384503   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 114/120
	I1014 14:56:53.385774   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 115/120
	I1014 14:56:54.387247   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 116/120
	I1014 14:56:55.388909   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 117/120
	I1014 14:56:56.390455   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 118/120
	I1014 14:56:57.391976   71050 main.go:141] libmachine: (embed-certs-989166) Waiting for machine to stop 119/120
	I1014 14:56:58.392679   71050 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1014 14:56:58.392737   71050 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1014 14:56:58.394843   71050 out.go:201] 
	W1014 14:56:58.396173   71050 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1014 14:56:58.396186   71050 out.go:270] * 
	* 
	W1014 14:56:58.399206   71050 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:56:58.400564   71050 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-989166 --alsologtostderr -v=3" : exit status 82
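For context on the exit status 82 above: the stderr shows minikube backing up /etc/cni and /etc/kubernetes to /var/lib/minikube/backup, asking the kvm2 driver to stop the domain, and then polling the machine state once per second for up to 120 attempts; because the VM never leaves "Running", the command gives up with GUEST_STOP_TIMEOUT. The following Go sketch only illustrates that poll-until-stopped pattern; stopVM and getState are hypothetical stand-ins for the libmachine driver calls, not minikube's real stop code.

// Illustrative sketch of the wait loop visible above ("Waiting for machine
// to stop N/120"); stopVM and getState are hypothetical stand-ins.
package main

import (
	"fmt"
	"time"
)

func waitForStop(stopVM func() error, getState func() string) error {
	if err := stopVM(); err != nil {
		return err
	}
	for i := 0; i < 120; i++ {
		if getState() != "Running" {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/120\n", i)
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", getState())
}

func main() {
	// Simulate a VM that never stops, reproducing the timeout seen in the log.
	err := waitForStop(func() error { return nil }, func() string { return "Running" })
	fmt.Println(err) // unable to stop vm, current state "Running"
}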
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166
E1014 14:57:01.835371   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:01.842666   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:01.854138   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:01.875527   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:01.916971   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:01.998466   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:02.160040   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:02.481354   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:03.123467   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:03.137884   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:04.405276   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:06.966921   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:11.655025   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:12.089029   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166: exit status 3 (18.453142963s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:57:16.854931   71918 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	E1014 14:57:16.854949   71918 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-989166" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-201291 --alsologtostderr -v=3
E1014 14:55:29.459928   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:29.466355   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:29.477712   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:29.499051   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:29.540425   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:29.621865   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:29.783379   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:30.105163   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:30.746642   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:32.028596   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:34.590095   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:39.712344   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:49.733319   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:55:49.953880   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-201291 --alsologtostderr -v=3: exit status 82 (2m0.474840151s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-201291"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:55:25.014445   71235 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:55:25.014779   71235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:55:25.014792   71235 out.go:358] Setting ErrFile to fd 2...
	I1014 14:55:25.014796   71235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:55:25.015010   71235 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:55:25.015242   71235 out.go:352] Setting JSON to false
	I1014 14:55:25.015312   71235 mustload.go:65] Loading cluster: default-k8s-diff-port-201291
	I1014 14:55:25.015668   71235 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:55:25.015731   71235 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/config.json ...
	I1014 14:55:25.015889   71235 mustload.go:65] Loading cluster: default-k8s-diff-port-201291
	I1014 14:55:25.015984   71235 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:55:25.016005   71235 stop.go:39] StopHost: default-k8s-diff-port-201291
	I1014 14:55:25.016360   71235 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:55:25.016399   71235 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:55:25.030900   71235 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35839
	I1014 14:55:25.031471   71235 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:55:25.032123   71235 main.go:141] libmachine: Using API Version  1
	I1014 14:55:25.032146   71235 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:55:25.032581   71235 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:55:25.035169   71235 out.go:177] * Stopping node "default-k8s-diff-port-201291"  ...
	I1014 14:55:25.036436   71235 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1014 14:55:25.036457   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 14:55:25.036677   71235 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1014 14:55:25.036704   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 14:55:25.039833   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 14:55:25.040264   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 15:53:58 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 14:55:25.040290   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 14:55:25.040429   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 14:55:25.040597   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 14:55:25.040756   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 14:55:25.040952   71235 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 14:55:25.157032   71235 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1014 14:55:25.212513   71235 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1014 14:55:25.251557   71235 main.go:141] libmachine: Stopping "default-k8s-diff-port-201291"...
	I1014 14:55:25.251594   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 14:55:25.253117   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Stop
	I1014 14:55:25.256715   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 0/120
	I1014 14:55:26.257850   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 1/120
	I1014 14:55:27.259067   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 2/120
	I1014 14:55:28.260841   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 3/120
	I1014 14:55:29.262023   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 4/120
	I1014 14:55:30.263646   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 5/120
	I1014 14:55:31.265123   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 6/120
	I1014 14:55:32.266254   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 7/120
	I1014 14:55:33.267590   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 8/120
	I1014 14:55:34.268849   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 9/120
	I1014 14:55:35.271055   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 10/120
	I1014 14:55:36.272305   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 11/120
	I1014 14:55:37.273496   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 12/120
	I1014 14:55:38.274588   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 13/120
	I1014 14:55:39.276088   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 14/120
	I1014 14:55:40.277647   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 15/120
	I1014 14:55:41.278893   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 16/120
	I1014 14:55:42.281121   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 17/120
	I1014 14:55:43.282358   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 18/120
	I1014 14:55:44.283658   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 19/120
	I1014 14:55:45.285452   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 20/120
	I1014 14:55:46.286706   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 21/120
	I1014 14:55:47.287945   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 22/120
	I1014 14:55:48.289128   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 23/120
	I1014 14:55:49.290401   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 24/120
	I1014 14:55:50.291969   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 25/120
	I1014 14:55:51.293144   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 26/120
	I1014 14:55:52.294241   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 27/120
	I1014 14:55:53.295398   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 28/120
	I1014 14:55:54.296444   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 29/120
	I1014 14:55:55.298338   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 30/120
	I1014 14:55:56.299548   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 31/120
	I1014 14:55:57.300971   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 32/120
	I1014 14:55:58.302325   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 33/120
	I1014 14:55:59.303522   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 34/120
	I1014 14:56:00.305642   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 35/120
	I1014 14:56:01.306946   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 36/120
	I1014 14:56:02.308916   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 37/120
	I1014 14:56:03.310057   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 38/120
	I1014 14:56:04.311196   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 39/120
	I1014 14:56:05.313172   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 40/120
	I1014 14:56:06.314170   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 41/120
	I1014 14:56:07.316186   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 42/120
	I1014 14:56:08.317371   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 43/120
	I1014 14:56:09.318494   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 44/120
	I1014 14:56:10.320449   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 45/120
	I1014 14:56:11.321667   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 46/120
	I1014 14:56:12.322722   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 47/120
	I1014 14:56:13.324878   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 48/120
	I1014 14:56:14.325903   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 49/120
	I1014 14:56:15.327122   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 50/120
	I1014 14:56:16.328839   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 51/120
	I1014 14:56:17.330045   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 52/120
	I1014 14:56:18.330930   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 53/120
	I1014 14:56:19.332836   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 54/120
	I1014 14:56:20.334561   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 55/120
	I1014 14:56:21.335734   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 56/120
	I1014 14:56:22.336851   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 57/120
	I1014 14:56:23.338158   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 58/120
	I1014 14:56:24.339345   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 59/120
	I1014 14:56:25.341071   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 60/120
	I1014 14:56:26.342030   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 61/120
	I1014 14:56:27.342931   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 62/120
	I1014 14:56:28.344723   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 63/120
	I1014 14:56:29.345815   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 64/120
	I1014 14:56:30.347480   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 65/120
	I1014 14:56:31.348681   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 66/120
	I1014 14:56:32.350549   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 67/120
	I1014 14:56:33.352069   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 68/120
	I1014 14:56:34.353496   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 69/120
	I1014 14:56:35.355522   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 70/120
	I1014 14:56:36.356964   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 71/120
	I1014 14:56:37.358459   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 72/120
	I1014 14:56:38.359972   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 73/120
	I1014 14:56:39.361632   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 74/120
	I1014 14:56:40.363920   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 75/120
	I1014 14:56:41.365480   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 76/120
	I1014 14:56:42.367115   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 77/120
	I1014 14:56:43.369285   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 78/120
	I1014 14:56:44.370790   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 79/120
	I1014 14:56:45.372783   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 80/120
	I1014 14:56:46.374351   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 81/120
	I1014 14:56:47.376537   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 82/120
	I1014 14:56:48.377948   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 83/120
	I1014 14:56:49.379485   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 84/120
	I1014 14:56:50.381096   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 85/120
	I1014 14:56:51.382725   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 86/120
	I1014 14:56:52.384179   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 87/120
	I1014 14:56:53.385926   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 88/120
	I1014 14:56:54.387389   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 89/120
	I1014 14:56:55.389599   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 90/120
	I1014 14:56:56.390854   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 91/120
	I1014 14:56:57.392212   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 92/120
	I1014 14:56:58.393632   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 93/120
	I1014 14:56:59.395082   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 94/120
	I1014 14:57:00.397100   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 95/120
	I1014 14:57:01.398561   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 96/120
	I1014 14:57:02.400078   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 97/120
	I1014 14:57:03.401532   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 98/120
	I1014 14:57:04.402977   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 99/120
	I1014 14:57:05.405266   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 100/120
	I1014 14:57:06.407138   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 101/120
	I1014 14:57:07.408881   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 102/120
	I1014 14:57:08.410462   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 103/120
	I1014 14:57:09.411995   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 104/120
	I1014 14:57:10.414003   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 105/120
	I1014 14:57:11.415516   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 106/120
	I1014 14:57:12.416987   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 107/120
	I1014 14:57:13.418559   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 108/120
	I1014 14:57:14.420018   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 109/120
	I1014 14:57:15.422279   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 110/120
	I1014 14:57:16.423775   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 111/120
	I1014 14:57:17.425031   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 112/120
	I1014 14:57:18.426495   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 113/120
	I1014 14:57:19.427834   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 114/120
	I1014 14:57:20.430250   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 115/120
	I1014 14:57:21.431615   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 116/120
	I1014 14:57:22.433224   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 117/120
	I1014 14:57:23.434734   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 118/120
	I1014 14:57:24.436174   71235 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for machine to stop 119/120
	I1014 14:57:25.437556   71235 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1014 14:57:25.437623   71235 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1014 14:57:25.439589   71235 out.go:201] 
	W1014 14:57:25.440715   71235 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1014 14:57:25.440729   71235 out.go:270] * 
	* 
	W1014 14:57:25.443693   71235 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:57:25.444848   71235 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-201291 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291: exit status 3 (18.543711395s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:57:43.990932   72112 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	E1014 14:57:43.990956   72112 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-201291" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.02s)
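Both failures in this block trace to the same host: "minikube stop" gives up with GUEST_STOP_TIMEOUT while the VM still reports "Running", and the follow-up status check can then no longer reach it over SSH. For local reproduction, the sketch below (not part of the test suite) keeps re-running the same status command the post-mortem uses until the host reports "Stopped". The binary path and profile name are copied from the log above; the 10-second poll interval and 3-minute deadline are arbitrary assumptions, not values the test uses.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForStopped re-runs the status command from the post-mortem above and
// returns once the host reports "Stopped" or the deadline passes.
func waitForStopped(profile string, deadline time.Duration) error {
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		out, _ := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", profile).CombinedOutput()
		state := strings.TrimSpace(string(out))
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("host state %q, retrying...\n", state)
		time.Sleep(10 * time.Second)
	}
	return fmt.Errorf("host did not reach Stopped within %s", deadline)
}

func main() {
	if err := waitForStopped("default-k8s-diff-port-201291", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}

With the behaviour captured in this report, the loop would keep printing "Running" until the deadline, matching the exit status 82 above.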

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300
E1014 14:56:23.449545   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:24.730945   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300: exit status 3 (3.16778392s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:56:26.262962   71549 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	E1014 14:56:26.262989   71549 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-813300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1014 14:56:27.292899   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:27.648568   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:32.414334   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-813300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154529301s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-813300 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300: exit status 3 (3.061195647s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:56:35.478967   71631 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	E1014 14:56:35.478986   71631 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-813300" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
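Every command in this block ultimately fails with "dial tcp 192.168.61.13:22: connect: no route to host", so the addon enable and the status checks never get past SSH. A minimal sketch of that underlying connectivity probe, using the VM address from the stderr above (the 5-second timeout is an assumption):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// no-preload VM address taken from the stderr above
	addr := "192.168.61.13:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		fmt.Printf("ssh port unreachable (%v); status and addon commands cannot succeed\n", err)
		return
	}
	conn.Close()
	fmt.Println("ssh port reachable; the failure is not a basic routing problem")
}

Running a probe like this before blaming the addon command separates a dead or unroutable VM from a genuine addon regression.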

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-399767 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-399767 create -f testdata/busybox.yaml: exit status 1 (45.691107ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-399767" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-399767 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
E1014 14:56:51.397744   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 6 (228.066017ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:56:51.556879   71810 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-399767" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 6 (220.675263ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:56:51.778560   71839 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-399767" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.49s)
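DeployApp fails only because the "old-k8s-version-399767" context was never written to the kubeconfig, so every kubectl call dies with "context ... does not exist" before any manifest is applied. A hypothetical pre-flight check (not something the test performs) that lists the contexts kubectl actually knows about before running against one:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists asks kubectl for the contexts it knows about and reports
// whether the requested one is among them.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("old-k8s-version-399767")
	fmt.Println("context present:", ok, "err:", err)
}

The post-mortem's "kubeconfig endpoint ... does not appear in .../kubeconfig" error is the same missing-context condition seen from minikube's side.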

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (81.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-399767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-399767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m21.417723023s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-399767 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-399767 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-399767 describe deploy/metrics-server -n kube-system: exit status 1 (43.718337ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-399767" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-399767 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
E1014 14:58:13.319622   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 6 (227.792076ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:58:13.467920   72502 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-399767" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (81.69s)
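The metrics-server enable fails because the in-VM kubectl apply gets "connection refused" from localhost:8443, i.e. kube-apiserver was not up when the addon callbacks ran. A minimal sketch, under the assumption that the apply only makes sense once the apiserver port answers: it polls a TCP endpoint before retrying. The address 192.168.72.138:8443 is the node address and port recorded in the profile config shown in the SecondStart log below (reachable only from a machine that can route to the VM network); the retry budget and timeouts are assumptions.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls a TCP endpoint until it accepts connections or the
// attempt budget is exhausted.
func waitForAPIServer(addr string, attempts int) error {
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("apiserver at %s never became reachable", addr)
}

func main() {
	if err := waitForAPIServer("192.168.72.138:8443", 24); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver reachable; retrying the addon enable would make sense now")
}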

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166: exit status 3 (3.16810996s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:57:20.022936   72030 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	E1014 14:57:20.022959   72030 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-989166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1014 14:57:22.331282   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-989166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153298537s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-989166 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166
E1014 14:57:29.093281   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166: exit status 3 (3.062374462s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:57:29.238984   72142 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host
	E1014 14:57:29.239021   72142 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.41:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-989166" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
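MK_ADDON_ENABLE_PAUSED above means minikube first SSHes into the node and runs crictl to check whether the runtime is paused, and it is that SSH session that fails ("check paused: list paused: crictl list: ... no route to host"). A hypothetical manual version of that probe using the real "minikube ssh" subcommand; the profile name comes from the log, while the exact crictl invocation is a guess at roughly what the paused check runs, not minikube's literal command:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "embed-certs-989166",
		"--", "sudo", "crictl", "ps", "-a")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		// With the host unreachable this fails the same way the addon enable does.
		fmt.Println("ssh probe failed:", err)
	}
}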

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
E1014 14:57:44.100157   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291: exit status 3 (3.167639166s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:57:47.158957   72263 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	E1014 14:57:47.158980   72263 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-201291 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1014 14:57:48.492582   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.187630   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.193969   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.205310   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.226721   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.268076   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.349564   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.511187   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:49.832822   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:50.474893   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:51.756583   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-201291 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153183577s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-201291 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
E1014 14:57:54.318061   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291: exit status 3 (3.062442182s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:57:56.374956   72344 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host
	E1014 14:57:56.374979   72344 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.128:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-201291" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
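Across these EnableAddonAfterStop failures the helpers accept exit status 3 as "may be ok" and then decide on the printed host state, failing because it is "Error" rather than "Stopped". A small, hypothetical sketch of that decision, separating the process exit code from the state string (binary path and profile name are from the log; the rest is illustrative):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostState runs the status command and returns the printed host state plus
// the command's exit code.
func hostState(profile string) (string, int, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", profile)
	out, runErr := cmd.CombinedOutput()
	state := strings.TrimSpace(string(out))
	var exitErr *exec.ExitError
	if errors.As(runErr, &exitErr) {
		// A non-zero exit is expected for a stopped or unreachable host ("may be ok").
		return state, exitErr.ExitCode(), nil
	}
	return state, 0, runErr
}

func main() {
	state, code, err := hostState("default-k8s-diff-port-201291")
	fmt.Printf("state=%q exit=%d err=%v\n", state, code, err)
	if err == nil && state != "Stopped" {
		fmt.Println("post-stop check would fail here: host is", state)
	}
}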

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (733.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-399767 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1014 14:58:19.216045   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:58:23.775666   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:58:30.164451   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:58:36.994369   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:58:51.015481   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:59:00.177777   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:59:06.021975   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:59:11.126267   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:59:27.794560   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:59:45.697682   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:59:55.497329   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:00:22.099182   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:00:29.460079   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:00:33.048188   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:00:57.160937   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:01:06.401361   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:01:07.152422   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:01:22.163069   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:01:34.857760   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:01:49.863545   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:02:01.835996   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:02:29.540037   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:02:38.241013   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:02:49.187779   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:03:05.941343   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:03:16.890423   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:03:36.994328   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:04:27.794801   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:05:00.073651   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:05:29.459676   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:06:06.400572   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:06:07.152913   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:06:22.162689   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-399767 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m10.372990912s)

                                                
                                                
-- stdout --
	* [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:58:18.000027   72639 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:58:18.000165   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000176   72639 out.go:358] Setting ErrFile to fd 2...
	I1014 14:58:18.000189   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000390   72639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:58:18.000911   72639 out.go:352] Setting JSON to false
	I1014 14:58:18.001828   72639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6048,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:58:18.001919   72639 start.go:139] virtualization: kvm guest
	I1014 14:58:18.004056   72639 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:58:18.005382   72639 notify.go:220] Checking for updates...
	I1014 14:58:18.005437   72639 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:58:18.006939   72639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:58:18.008275   72639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:58:18.009565   72639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:58:18.010773   72639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:58:18.011941   72639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:58:18.013472   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:58:18.013833   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.013892   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.028372   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1014 14:58:18.028786   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.029355   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.029375   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.029671   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.029827   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.031644   72639 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:58:18.033229   72639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:58:18.033524   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.033565   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.048210   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1014 14:58:18.048620   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.049080   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.049102   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.049377   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.049550   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.084664   72639 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:58:18.085942   72639 start.go:297] selected driver: kvm2
	I1014 14:58:18.085952   72639 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.086042   72639 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:58:18.086707   72639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.086795   72639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:58:18.101802   72639 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:58:18.102194   72639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:58:18.102224   72639 cni.go:84] Creating CNI manager for ""
	I1014 14:58:18.102263   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:58:18.102315   72639 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.102441   72639 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.105418   72639 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:58:18.106656   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:58:18.106696   72639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:58:18.106708   72639 cache.go:56] Caching tarball of preloaded images
	I1014 14:58:18.106790   72639 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:58:18.106800   72639 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:58:18.106889   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:58:18.107063   72639 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:01:52.179441   72639 start.go:364] duration metric: took 3m34.072351032s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 15:01:52.179497   72639 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:52.179505   72639 fix.go:54] fixHost starting: 
	I1014 15:01:52.179834   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:52.179873   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:52.196724   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I1014 15:01:52.197171   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:52.197649   72639 main.go:141] libmachine: Using API Version  1
	I1014 15:01:52.197673   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:52.198010   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:52.198191   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:01:52.198337   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 15:01:52.199789   72639 fix.go:112] recreateIfNeeded on old-k8s-version-399767: state=Stopped err=<nil>
	I1014 15:01:52.199826   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	W1014 15:01:52.199998   72639 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:52.202220   72639 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	I1014 15:01:52.203601   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .Start
	I1014 15:01:52.203771   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 15:01:52.204575   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 15:01:52.204971   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 15:01:52.205326   72639 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 15:01:52.206026   72639 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 15:01:53.506315   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 15:01:53.507576   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.508228   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.508297   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.508202   73581 retry.go:31] will retry after 220.59125ms: waiting for machine to come up
	I1014 15:01:53.730853   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.731286   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.731339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.731257   73581 retry.go:31] will retry after 321.559387ms: waiting for machine to come up
	I1014 15:01:54.054891   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.055482   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.055509   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.055443   73581 retry.go:31] will retry after 444.912998ms: waiting for machine to come up
	I1014 15:01:54.502125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.502479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.502525   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.502462   73581 retry.go:31] will retry after 600.214254ms: waiting for machine to come up
	I1014 15:01:55.104962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.105479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.105504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.105425   73581 retry.go:31] will retry after 686.77698ms: waiting for machine to come up
	I1014 15:01:55.794125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.794825   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.794871   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.794717   73581 retry.go:31] will retry after 926.146146ms: waiting for machine to come up
	I1014 15:01:56.722712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:56.723153   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:56.723183   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:56.723112   73581 retry.go:31] will retry after 1.108272037s: waiting for machine to come up
	I1014 15:01:57.832729   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:57.833304   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:57.833356   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:57.833279   73581 retry.go:31] will retry after 1.442737664s: waiting for machine to come up
	I1014 15:01:59.278031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:59.278558   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:59.278586   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:59.278519   73581 retry.go:31] will retry after 1.187069828s: waiting for machine to come up
	I1014 15:02:00.467810   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:00.468237   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:00.468267   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:00.468195   73581 retry.go:31] will retry after 1.667312665s: waiting for machine to come up
	I1014 15:02:02.137067   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:02.137569   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:02.137590   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:02.137530   73581 retry.go:31] will retry after 1.910892221s: waiting for machine to come up
	I1014 15:02:04.050521   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:04.051060   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:04.051099   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:04.051015   73581 retry.go:31] will retry after 2.29433775s: waiting for machine to come up
	I1014 15:02:06.347519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:06.347985   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:06.348004   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:06.347945   73581 retry.go:31] will retry after 3.499922823s: waiting for machine to come up
	I1014 15:02:09.851017   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851562   72639 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 15:02:09.851582   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851587   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 15:02:09.851961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.851991   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | skip adding static IP to network mk-old-k8s-version-399767 - found existing host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"}
	I1014 15:02:09.852009   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 15:02:09.852021   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 15:02:09.852031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 15:02:09.854039   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854351   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.854378   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854493   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 15:02:09.854517   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 15:02:09.854547   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:09.854559   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 15:02:09.854572   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 15:02:09.979174   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
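Note: the waits in the retry lines above grow (with jitter) from roughly 220ms to 3.5s until the libvirt domain reports an IP, and the `exit 0` probe then confirms SSH is reachable. A minimal, hypothetical Go sketch of that retry-with-growing-backoff pattern (illustration only, not minikube's actual retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or maxWait elapses, sleeping a
    // randomized, slowly growing interval between attempts, similar in spirit to
    // the "will retry after ..." lines above.
    func retryWithBackoff(fn func() error, maxWait time.Duration) error {
        deadline := time.Now().Add(maxWait)
        base := 200 * time.Millisecond
        for attempt := 1; ; attempt++ {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
            }
            // Add jitter and grow the base delay, like the 220ms, 321ms, 444ms, ... waits in the log.
            sleep := base + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            base = base * 3 / 2
        }
    }

    func main() {
        attempts := 0
        err := retryWithBackoff(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        }, 30*time.Second)
        fmt.Println("result:", err)
    }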
	I1014 15:02:09.979594   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 15:02:09.980252   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:09.983038   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.983502   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983891   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 15:02:09.984191   72639 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:09.984220   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:09.984487   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:09.986947   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987361   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.987389   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987514   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:09.987682   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987830   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987924   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:09.988076   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:09.988338   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:09.988352   72639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:10.098944   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:10.098968   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099242   72639 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 15:02:10.099268   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099437   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.101961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102298   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.102320   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102468   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.102670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102846   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102980   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.103124   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.103337   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.103353   72639 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 15:02:10.226037   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 15:02:10.226069   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.228712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229059   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.229082   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229228   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.229408   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229549   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.229804   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.230001   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.230018   72639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:10.344175   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
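The shell snippet above is an idempotent hostname fix-up: it only touches /etc/hosts when the new name does not already resolve, preferring to rewrite an existing 127.0.1.1 entry. A hypothetical Go rendering of the same text transformation (function name and layout assumed, not minikube code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostname returns hosts with a local mapping for name, mirroring the
    // grep/sed/tee logic in the snippet above.
    func ensureHostname(hosts, name string) string {
        if matched, _ := regexp.MatchString(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`, hosts); matched {
            return hosts // hostname already resolves locally
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
        }
        if !strings.HasSuffix(hosts, "\n") {
            hosts += "\n"
        }
        return hosts + "127.0.1.1 " + name + "\n"
    }

    func main() {
        fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 minikube\n", "old-k8s-version-399767"))
    }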
	I1014 15:02:10.344206   72639 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:10.344270   72639 buildroot.go:174] setting up certificates
	I1014 15:02:10.344284   72639 provision.go:84] configureAuth start
	I1014 15:02:10.344302   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.344632   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:10.347200   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347587   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.347623   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347812   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.349962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350332   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.350364   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350502   72639 provision.go:143] copyHostCerts
	I1014 15:02:10.350558   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:10.350574   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:10.350646   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:10.350734   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:10.350742   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:10.350762   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:10.350812   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:10.350819   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:10.350837   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:10.350887   72639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
	I1014 15:02:10.602118   72639 provision.go:177] copyRemoteCerts
	I1014 15:02:10.602175   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:10.602199   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.604519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604744   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.604776   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.605127   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.605273   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.605403   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:10.689081   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:10.713512   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 15:02:10.738086   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:10.762274   72639 provision.go:87] duration metric: took 417.977128ms to configureAuth
	I1014 15:02:10.762307   72639 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:10.762486   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 15:02:10.762552   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.765134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765442   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.765469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765600   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.765756   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765903   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765998   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.766131   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.766297   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.766311   72639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:11.011252   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:11.011279   72639 machine.go:96] duration metric: took 1.027069423s to provisionDockerMachine
	I1014 15:02:11.011292   72639 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 15:02:11.011304   72639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:11.011349   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.011716   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:11.011751   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.014418   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014754   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.014790   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.015125   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.015260   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.015376   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.097883   72639 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:11.102452   72639 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:11.102481   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:11.102551   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:11.102687   72639 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:11.102781   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:11.112774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:11.138211   72639 start.go:296] duration metric: took 126.906035ms for postStartSetup
	I1014 15:02:11.138247   72639 fix.go:56] duration metric: took 18.958741429s for fixHost
	I1014 15:02:11.138270   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.140740   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141100   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.141139   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141280   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.141484   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141668   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141811   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.141974   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:11.142131   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:11.142141   72639 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:11.248330   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918131.224010283
	
	I1014 15:02:11.248355   72639 fix.go:216] guest clock: 1728918131.224010283
	I1014 15:02:11.248373   72639 fix.go:229] Guest: 2024-10-14 15:02:11.224010283 +0000 UTC Remote: 2024-10-14 15:02:11.138252894 +0000 UTC m=+233.173555624 (delta=85.757389ms)
	I1014 15:02:11.248399   72639 fix.go:200] guest clock delta is within tolerance: 85.757389ms
	I1014 15:02:11.248406   72639 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 19.068928968s
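The guest-clock check above parses the VM's `date +%s.%N` output, compares it with the host-side timestamp, and accepts the ~85.76ms delta. A small illustrative sketch of that comparison (the 1s tolerance threshold is an assumption for the example, not taken from the log):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts output like "1728918131.224010283" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Right-pad the fractional part to nanosecond precision.
            frac := (parts[1] + "000000000")[:9]
            nsec, err = strconv.ParseInt(frac, 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, _ := parseGuestClock("1728918131.224010283")
        remote := time.Date(2024, 10, 14, 15, 2, 11, 138252894, time.UTC)
        delta := guest.Sub(remote)
        if delta < 0 {
            delta = -delta
        }
        tolerance := 1 * time.Second // assumed threshold, for illustration only
        fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }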
	I1014 15:02:11.248434   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.248692   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:11.251774   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.252176   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252358   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.252840   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253017   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253104   72639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:11.253150   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.253232   72639 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:11.253259   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.256105   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256529   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256662   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.256732   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256771   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256844   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.256932   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.257003   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257141   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.257131   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.257296   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257414   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.363838   72639 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:11.370414   72639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:11.521232   72639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:11.527623   72639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:11.527712   72639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:11.544532   72639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:11.544559   72639 start.go:495] detecting cgroup driver to use...
	I1014 15:02:11.544614   72639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:11.561693   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:11.576555   72639 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:11.576622   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:11.593830   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:11.608785   72639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:11.731034   72639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:11.909278   72639 docker.go:233] disabling docker service ...
	I1014 15:02:11.909359   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:11.931218   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:11.951710   72639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:12.103012   72639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:12.252290   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:12.270497   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:12.293240   72639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 15:02:12.293297   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.304881   72639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:12.304958   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.316294   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.328591   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.340085   72639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:12.351765   72639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:12.362454   72639 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:12.362525   72639 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:12.376865   72639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:12.387779   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:12.528541   72639 ssh_runner.go:195] Run: sudo systemctl restart crio
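For reference, the crictl.yaml write and the sed edits above should leave the runtime configured roughly as follows before the restart (the TOML section headers are where CRI-O normally keeps these keys and are assumed here; the commands themselves only edit the individual key lines):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"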
	I1014 15:02:12.635262   72639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:12.635335   72639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:12.641070   72639 start.go:563] Will wait 60s for crictl version
	I1014 15:02:12.641121   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:12.645111   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:12.691103   72639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:12.691199   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.720182   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.754856   72639 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 15:02:12.756005   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:12.759369   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.759890   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:12.759924   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.760164   72639 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:12.765342   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:12.782182   72639 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:12.782307   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 15:02:12.782374   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:12.841797   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:12.841871   72639 ssh_runner.go:195] Run: which lz4
	I1014 15:02:12.846193   72639 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:02:12.850982   72639 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:02:12.851019   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 15:02:14.579304   72639 crio.go:462] duration metric: took 1.733147869s to copy over tarball
	I1014 15:02:14.579405   72639 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:02:17.644891   72639 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06545265s)
	I1014 15:02:17.644954   72639 crio.go:469] duration metric: took 3.065620277s to extract the tarball
	I1014 15:02:17.644979   72639 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:02:17.688304   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:17.727862   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:17.727888   72639 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:17.727984   72639 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.727995   72639 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.728006   72639 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.728036   72639 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.727986   72639 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.728104   72639 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.728169   72639 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 15:02:17.728267   72639 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.729941   72639 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729954   72639 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 15:02:17.729984   72639 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.729999   72639 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.729913   72639 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.730335   72639 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.889181   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.912728   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.919124   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.920117   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.934314   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 15:02:17.951143   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.956588   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.964968   72639 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 15:02:17.965031   72639 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.965066   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041388   72639 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 15:02:18.041436   72639 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.041489   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041504   72639 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 15:02:18.041540   72639 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.041579   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069534   72639 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 15:02:18.069582   72639 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 15:02:18.069631   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069794   72639 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 15:02:18.069821   72639 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.069852   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.096492   72639 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 15:02:18.096536   72639 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.096575   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104764   72639 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 15:02:18.104810   72639 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.104816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.104854   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104876   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.104885   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.104980   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.104984   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.105025   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.119784   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.213816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.241644   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.288717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.288820   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.288931   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.289005   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.295481   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.376936   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.393755   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.449717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.449798   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.449824   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.449904   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.461905   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.508804   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 15:02:18.521502   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 15:02:18.612103   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 15:02:18.613450   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 15:02:18.613548   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 15:02:18.613625   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 15:02:18.613715   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 15:02:18.741774   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:18.888495   72639 cache_images.go:92] duration metric: took 1.16058525s to LoadCachedImages
	W1014 15:02:18.888578   72639 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
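Each v1.20.0 image is reported as "needs transfer" because the extracted preload did not contain it at the expected image ID; the stale tag is removed with crictl rmi and minikube then falls back to the on-disk image cache, which in this run is missing the etcd tarball, hence the warning. A hypothetical Go sketch of that per-image decision (types and names assumed for illustration):

    package main

    import "fmt"

    type runtimeImages map[string]string // image name -> image ID present in the container runtime

    // needsTransfer reports whether the runtime lacks the image at the expected ID,
    // in which case the stale tag is removed and the image is loaded from cache.
    func needsTransfer(have runtimeImages, name, wantID string) bool {
        id, ok := have[name]
        return !ok || id != wantID
    }

    func main() {
        have := runtimeImages{} // the extracted preload did not contain the v1.20.0 images
        name := "registry.k8s.io/etcd:3.4.13-0"
        wantID := "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"
        if needsTransfer(have, name, wantID) {
            fmt.Printf("%q needs transfer: not present at expected ID, will rmi and load from cache\n", name)
        }
    }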
	I1014 15:02:18.888594   72639 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 15:02:18.888707   72639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:18.888791   72639 ssh_runner.go:195] Run: crio config
	I1014 15:02:18.943058   72639 cni.go:84] Creating CNI manager for ""
	I1014 15:02:18.943082   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:18.943091   72639 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:18.943108   72639 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 15:02:18.943225   72639 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:18.943285   72639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 15:02:18.956635   72639 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:18.956727   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:18.970846   72639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 15:02:18.992163   72639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:19.012061   72639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 15:02:19.033158   72639 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:19.037195   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:19.051127   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:19.172992   72639 ssh_runner.go:195] Run: sudo systemctl start kubelet
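	The /etc/hosts rewrite above (15:02:19.037) is the usual idempotent pattern: strip any existing control-plane.minikube.internal entry, append the current IP/hostname pair via a temp file, then copy the result back before reloading and starting the kubelet. A minimal local Go sketch of building that same bash pipeline (a hypothetical helper, not minikube's ssh_runner code; it only prints the command):

package main

import "fmt"

func main() {
	ip := "192.168.72.138"
	host := "control-plane.minikube.internal"
	// Mirrors the shape of the command in the log: filter out any stale
	// entry for the hostname, append a fresh "IP<TAB>host" line through a
	// temp file, then copy the result back over /etc/hosts.
	cmd := fmt.Sprintf("{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts", host, ip, host)
	fmt.Println(cmd)
}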
	I1014 15:02:19.190545   72639 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 15:02:19.190572   72639 certs.go:194] generating shared ca certs ...
	I1014 15:02:19.190592   72639 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.190786   72639 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:19.190843   72639 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:19.190853   72639 certs.go:256] generating profile certs ...
	I1014 15:02:19.190973   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 15:02:19.191053   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 15:02:19.191108   72639 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 15:02:19.191264   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:19.191302   72639 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:19.191314   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:19.191345   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:19.191374   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:19.191423   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:19.191477   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:19.192328   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:19.248981   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:19.281262   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:19.312859   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:19.351940   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 15:02:19.405710   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:19.441313   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:19.481774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 15:02:19.509433   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:19.537994   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:19.564460   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:19.593632   72639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:19.614775   72639 ssh_runner.go:195] Run: openssl version
	I1014 15:02:19.623548   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:19.636680   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642225   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642286   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.648609   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:19.661130   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:19.672988   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678119   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678189   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.684583   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:19.696685   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:19.708338   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713443   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713502   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.719482   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
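	The test/ln pairs above install each CA under the name OpenSSL actually looks up: the subject hash plus a ".0" suffix (for example b5213941.0 for minikubeCA.pem). A small Go sketch of computing that symlink name the same way, by shelling out to openssl (assumes openssl is on PATH and the PEM path exists, as in the log; not minikube's code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash that
	// OpenSSL expects as the link name under /etc/ssl/certs (<hash>.0).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
}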
	I1014 15:02:19.731720   72639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:19.739006   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:19.747558   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:19.756399   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:19.764987   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:19.773320   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:19.781239   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
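	The six `openssl x509 ... -checkend 86400` runs above ask whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would force regeneration. The equivalent check in pure Go, as a sketch using crypto/x509 (not how minikube implements it):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// becomes invalid within the given duration (what -checkend 86400 tests).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", expiring)
}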
	I1014 15:02:19.788638   72639 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:19.788753   72639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:19.788810   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.829586   72639 cri.go:89] found id: ""
	I1014 15:02:19.829641   72639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:19.844632   72639 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:19.844654   72639 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:19.844708   72639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:19.860547   72639 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:19.861848   72639 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:19.862755   72639 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-399767" cluster setting kubeconfig missing "old-k8s-version-399767" context setting]
	I1014 15:02:19.863757   72639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.927447   72639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:19.940830   72639 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.138
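	The `diff -u kubeadm.yaml kubeadm.yaml.new` above is how the restart path decides whether the rendered kubeadm config changed: diff exits 0 when the files are identical (the "does not require reconfiguration" case logged here) and 1 when they differ. A local Go sketch of that decision, under the assumption that those exit codes drive it (hypothetical, run locally rather than over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff exits 0 when the files are identical and 1 when they differ;
	// anything else is a real error (for example, a missing file).
	cmd := exec.Command("sudo", "diff", "-u", "/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	err := cmd.Run()
	switch {
	case err == nil:
		fmt.Println("cluster does not require reconfiguration")
	case cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 1:
		fmt.Println("kubeadm config changed, reconfiguration needed")
	default:
		fmt.Println("diff failed:", err)
	}
}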
	I1014 15:02:19.940919   72639 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:19.940947   72639 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:19.941009   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.983689   72639 cri.go:89] found id: ""
	I1014 15:02:19.983769   72639 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:20.007079   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:20.023868   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:20.023896   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:20.023971   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:20.038661   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:20.038734   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:20.054357   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:20.068771   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:20.068843   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:20.081157   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.095416   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:20.095483   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.109099   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:20.120608   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:20.120680   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
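	The four grep/rm pairs above apply the same rule to each kubeconfig under /etc/kubernetes: if the file does not reference https://control-plane.minikube.internal:8443 (here they are all missing, so grep exits 2), it is removed so that kubeadm regenerates it. A compact sketch of that loop (a hypothetical local version, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		// grep exits non-zero when the endpoint is absent or the file is
		// missing; in either case the stale config is cleared away.
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Println("removing stale", path)
			exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}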
	I1014 15:02:20.133217   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:20.145896   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:20.311840   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.472918   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.161037865s)
	I1014 15:02:21.472953   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.739827   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.833423   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
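	Rather than a full `kubeadm init`, the restart path replays the individual init phases in order against the same config file: certs all, kubeconfig all, kubelet-start, control-plane all, etcd local, as the five runs above show. A sketch that just assembles those commands with the paths from the log (illustrative only):

package main

import "fmt"

func main() {
	binDir := "/var/lib/minikube/binaries/v1.20.0"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		// Each phase is run with the version-pinned kubeadm binary on PATH.
		fmt.Printf("sudo env PATH=\"%s:$PATH\" kubeadm init phase %s --config %s\n", binDir, p, cfg)
	}
}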
	I1014 15:02:21.931874   72639 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:21.931987   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.432595   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.932784   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.432728   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.932296   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.432079   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.932064   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.432201   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.932119   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.432423   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.932675   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.432633   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.932380   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.432518   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.932871   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.432350   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.932761   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.432621   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.932873   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.432716   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.932364   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.432747   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.933039   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.432474   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.932719   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.432581   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.932863   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.432886   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.932915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.432852   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.932367   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.432894   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.933035   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.432551   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.932486   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.432591   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.932694   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.432065   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.932044   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.432313   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.933055   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.432453   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.932258   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.432054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.932139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.432261   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.932517   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.432959   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.933103   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.432845   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.932825   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.432059   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.932745   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.432869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.432754   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.432199   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.932861   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.432404   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.932097   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.432569   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.933078   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.432335   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.932860   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.433105   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.933031   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.432058   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.932422   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.432618   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.932727   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.432265   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.932733   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.432774   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.932666   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.433020   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.932671   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.432717   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.932917   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.432735   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.932668   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.432260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.932075   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.432139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.932241   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.432421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.932869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.432972   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.933010   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.432409   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.932778   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.432067   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.932749   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.432529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.932034   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.933054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.432938   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.932661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.432392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.932068   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.432066   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.932122   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.432556   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.932427   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.432053   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.932460   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.432714   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.933071   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.432567   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.932414   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.432985   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.932960   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.433026   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.932015   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.932030   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.433050   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.932658   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.432667   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
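	The block above is the apiserver wait loop: roughly every 500ms the runner checks `pgrep -xnf kube-apiserver.*minikube.*` until the process appears or the wait expires. In this run it never appears, so the code falls through to listing containers and gathering logs below. A local Go sketch of the same poll (hypothetical, with a deliberately short timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(1 * time.Minute) // the real wait is much longer
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver; collecting logs instead")
}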
	I1014 15:03:21.933045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:21.933127   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:21.973476   72639 cri.go:89] found id: ""
	I1014 15:03:21.973507   72639 logs.go:282] 0 containers: []
	W1014 15:03:21.973517   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:21.973523   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:21.973584   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:22.011700   72639 cri.go:89] found id: ""
	I1014 15:03:22.011732   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.011742   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:22.011748   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:22.011814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:22.047721   72639 cri.go:89] found id: ""
	I1014 15:03:22.047744   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.047752   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:22.047762   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:22.047814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:22.091618   72639 cri.go:89] found id: ""
	I1014 15:03:22.091644   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.091652   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:22.091657   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:22.091706   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:22.129997   72639 cri.go:89] found id: ""
	I1014 15:03:22.130036   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.130047   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:22.130055   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:22.130114   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:22.168024   72639 cri.go:89] found id: ""
	I1014 15:03:22.168053   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.168061   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:22.168067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:22.168136   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:22.202633   72639 cri.go:89] found id: ""
	I1014 15:03:22.202660   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.202670   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:22.202677   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:22.202739   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:22.238224   72639 cri.go:89] found id: ""
	I1014 15:03:22.238251   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.238259   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:22.238267   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:22.238278   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:22.251940   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:22.251991   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:22.379777   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:22.379799   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:22.379814   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:22.456468   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:22.456507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:22.495404   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:22.495433   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:25.048061   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:25.068586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:25.068658   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:25.121199   72639 cri.go:89] found id: ""
	I1014 15:03:25.121228   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.121237   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:25.121243   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:25.121303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:25.174705   72639 cri.go:89] found id: ""
	I1014 15:03:25.174738   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.174749   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:25.174757   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:25.174815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:25.236972   72639 cri.go:89] found id: ""
	I1014 15:03:25.237002   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.237013   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:25.237020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:25.237077   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:25.276443   72639 cri.go:89] found id: ""
	I1014 15:03:25.276473   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.276483   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:25.276489   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:25.276541   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:25.314573   72639 cri.go:89] found id: ""
	I1014 15:03:25.314623   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.314636   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:25.314645   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:25.314708   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:25.357489   72639 cri.go:89] found id: ""
	I1014 15:03:25.357515   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.357525   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:25.357533   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:25.357595   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:25.397504   72639 cri.go:89] found id: ""
	I1014 15:03:25.397527   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.397538   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:25.397546   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:25.397597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:25.433139   72639 cri.go:89] found id: ""
	I1014 15:03:25.433162   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.433170   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:25.433179   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:25.433193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:25.448088   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:25.448121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:25.522377   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:25.522401   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:25.522415   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:25.595505   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:25.595538   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:25.643478   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:25.643511   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:28.195236   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:28.208612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:28.208686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:28.248538   72639 cri.go:89] found id: ""
	I1014 15:03:28.248569   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.248581   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:28.248588   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:28.248652   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:28.286103   72639 cri.go:89] found id: ""
	I1014 15:03:28.286131   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.286143   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:28.286149   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:28.286209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:28.321335   72639 cri.go:89] found id: ""
	I1014 15:03:28.321371   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.321383   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:28.321391   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:28.321453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:28.358538   72639 cri.go:89] found id: ""
	I1014 15:03:28.358571   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.358581   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:28.358588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:28.358661   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:28.397058   72639 cri.go:89] found id: ""
	I1014 15:03:28.397087   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.397099   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:28.397106   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:28.397175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:28.434010   72639 cri.go:89] found id: ""
	I1014 15:03:28.434032   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.434040   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:28.434045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:28.434095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:28.474646   72639 cri.go:89] found id: ""
	I1014 15:03:28.474672   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.474681   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:28.474687   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:28.474736   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:28.512833   72639 cri.go:89] found id: ""
	I1014 15:03:28.512860   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.512871   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:28.512882   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:28.512894   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:28.526233   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:28.526262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:28.601366   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:28.601393   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:28.601416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:28.690261   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:28.690300   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:28.734134   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:28.734158   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.290184   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:31.303493   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:31.303558   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:31.341521   72639 cri.go:89] found id: ""
	I1014 15:03:31.341552   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.341563   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:31.341569   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:31.341627   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:31.378811   72639 cri.go:89] found id: ""
	I1014 15:03:31.378839   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.378851   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:31.378859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:31.378922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:31.416282   72639 cri.go:89] found id: ""
	I1014 15:03:31.416310   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.416321   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:31.416328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:31.416392   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:31.456089   72639 cri.go:89] found id: ""
	I1014 15:03:31.456123   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.456134   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:31.456142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:31.456202   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:31.496429   72639 cri.go:89] found id: ""
	I1014 15:03:31.496468   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.496478   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:31.496485   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:31.496548   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:31.535226   72639 cri.go:89] found id: ""
	I1014 15:03:31.535248   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.535256   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:31.535262   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:31.535321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:31.572580   72639 cri.go:89] found id: ""
	I1014 15:03:31.572608   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.572623   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:31.572631   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:31.572691   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:31.606736   72639 cri.go:89] found id: ""
	I1014 15:03:31.606759   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.606766   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:31.606774   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:31.606785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:31.646048   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:31.646078   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.696818   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:31.696851   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:31.710099   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:31.710128   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:31.787756   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:31.787783   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:31.787798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:34.369392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:34.383263   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:34.383344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:34.417763   72639 cri.go:89] found id: ""
	I1014 15:03:34.417797   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.417809   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:34.417816   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:34.417890   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:34.453361   72639 cri.go:89] found id: ""
	I1014 15:03:34.453391   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.453402   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:34.453409   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:34.453488   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:34.490878   72639 cri.go:89] found id: ""
	I1014 15:03:34.490905   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.490913   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:34.490919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:34.490980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:34.527554   72639 cri.go:89] found id: ""
	I1014 15:03:34.527584   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.527595   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:34.527603   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:34.527655   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:34.564813   72639 cri.go:89] found id: ""
	I1014 15:03:34.564841   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.564851   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:34.564857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:34.564903   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:34.599899   72639 cri.go:89] found id: ""
	I1014 15:03:34.599930   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.599942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:34.599949   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:34.600019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:34.641686   72639 cri.go:89] found id: ""
	I1014 15:03:34.641717   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.641728   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:34.641735   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:34.641794   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:34.681154   72639 cri.go:89] found id: ""
	I1014 15:03:34.681184   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.681195   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:34.681205   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:34.681218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:34.719638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:34.719672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:34.771687   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:34.771722   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:34.785943   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:34.785972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:34.861821   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:34.861861   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:34.861875   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
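
Each retry in the log above and below follows the same pattern: minikube probes for the control-plane containers with crictl, finds none, and then gathers kubelet, dmesg, describe-nodes and CRI-O logs before trying again a few seconds later. The following is a minimal bash sketch of that probe sequence, built only from commands that appear verbatim in the log; it is illustrative and not part of the test harness.

# Sketch of the probe cycle recorded in the log (commands copied from the log lines).
# Assumes it is run on the minikube node, where crictl and journalctl are available.
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  [ -z "$ids" ] && echo "No container was found matching \"$name\""
done
# Fallback diagnostics gathered once no control-plane containers are found:
sudo journalctl -u kubelet -n 400
sudo journalctl -u crio -n 400
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
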
	I1014 15:03:37.441605   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:37.456763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:37.456828   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:37.494176   72639 cri.go:89] found id: ""
	I1014 15:03:37.494202   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.494210   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:37.494216   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:37.494268   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:37.538802   72639 cri.go:89] found id: ""
	I1014 15:03:37.538834   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.538846   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:37.538853   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:37.538913   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:37.586282   72639 cri.go:89] found id: ""
	I1014 15:03:37.586312   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.586322   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:37.586328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:37.586397   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:37.632673   72639 cri.go:89] found id: ""
	I1014 15:03:37.632698   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.632709   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:37.632715   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:37.632771   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:37.673340   72639 cri.go:89] found id: ""
	I1014 15:03:37.673364   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.673372   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:37.673377   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:37.673427   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:37.718725   72639 cri.go:89] found id: ""
	I1014 15:03:37.718750   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.718758   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:37.718764   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:37.718807   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:37.760560   72639 cri.go:89] found id: ""
	I1014 15:03:37.760587   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.760597   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:37.760605   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:37.760665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:37.800912   72639 cri.go:89] found id: ""
	I1014 15:03:37.800941   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.800949   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:37.800957   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:37.800968   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:37.815338   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:37.815363   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:37.893018   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:37.893050   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:37.893067   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.978315   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:37.978349   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:38.019760   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:38.019788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.570918   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:40.586058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:40.586122   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:40.623753   72639 cri.go:89] found id: ""
	I1014 15:03:40.623784   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.623795   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:40.623801   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:40.623862   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:40.663909   72639 cri.go:89] found id: ""
	I1014 15:03:40.663937   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.663946   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:40.663953   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:40.664008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:40.698572   72639 cri.go:89] found id: ""
	I1014 15:03:40.698615   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.698626   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:40.698633   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:40.698683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:40.734882   72639 cri.go:89] found id: ""
	I1014 15:03:40.734907   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.734914   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:40.734920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:40.734976   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:40.768429   72639 cri.go:89] found id: ""
	I1014 15:03:40.768455   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.768462   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:40.768468   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:40.768527   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:40.803429   72639 cri.go:89] found id: ""
	I1014 15:03:40.803456   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.803466   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:40.803474   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:40.803535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:40.842854   72639 cri.go:89] found id: ""
	I1014 15:03:40.842883   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.842905   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:40.842913   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:40.842988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:40.879638   72639 cri.go:89] found id: ""
	I1014 15:03:40.879661   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.879669   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:40.879677   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:40.879687   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:40.924949   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:40.924983   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.976271   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:40.976304   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:40.991492   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:40.991520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:41.071418   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:41.071439   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:41.071453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:43.652387   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:43.666239   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:43.666317   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:43.705726   72639 cri.go:89] found id: ""
	I1014 15:03:43.705752   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.705761   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:43.705766   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:43.705814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:43.745648   72639 cri.go:89] found id: ""
	I1014 15:03:43.745672   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.745680   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:43.745685   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:43.745731   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:43.783032   72639 cri.go:89] found id: ""
	I1014 15:03:43.783055   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.783063   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:43.783068   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:43.783115   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:43.820582   72639 cri.go:89] found id: ""
	I1014 15:03:43.820607   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.820617   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:43.820623   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:43.820669   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:43.862312   72639 cri.go:89] found id: ""
	I1014 15:03:43.862338   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.862348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:43.862353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:43.862404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:43.898338   72639 cri.go:89] found id: ""
	I1014 15:03:43.898368   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.898379   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:43.898388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:43.898448   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:43.934682   72639 cri.go:89] found id: ""
	I1014 15:03:43.934709   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.934719   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:43.934726   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:43.934781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:43.970209   72639 cri.go:89] found id: ""
	I1014 15:03:43.970237   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.970247   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:43.970257   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:43.970269   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:44.024791   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:44.024832   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:44.038431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:44.038457   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:44.117255   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:44.117291   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:44.117308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:44.199397   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:44.199436   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:46.739819   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:46.755553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:46.755625   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:46.797225   72639 cri.go:89] found id: ""
	I1014 15:03:46.797253   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.797265   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:46.797272   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:46.797335   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:46.832999   72639 cri.go:89] found id: ""
	I1014 15:03:46.833025   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.833036   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:46.833043   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:46.833103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:46.872711   72639 cri.go:89] found id: ""
	I1014 15:03:46.872733   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.872741   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:46.872746   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:46.872795   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:46.909945   72639 cri.go:89] found id: ""
	I1014 15:03:46.909968   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.909977   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:46.909985   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:46.910046   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:46.946036   72639 cri.go:89] found id: ""
	I1014 15:03:46.946067   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.946080   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:46.946087   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:46.946141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:46.981772   72639 cri.go:89] found id: ""
	I1014 15:03:46.981806   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.981819   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:46.981828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:46.981896   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:47.022761   72639 cri.go:89] found id: ""
	I1014 15:03:47.022790   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.022800   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:47.022807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:47.022869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:47.057368   72639 cri.go:89] found id: ""
	I1014 15:03:47.057392   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.057400   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:47.057408   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:47.057418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:47.134369   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:47.134408   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:47.179550   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:47.179586   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:47.233317   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:47.233355   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:47.247598   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:47.247629   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:47.321309   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:49.821955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:49.836907   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:49.836975   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:49.876651   72639 cri.go:89] found id: ""
	I1014 15:03:49.876682   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.876694   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:49.876713   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:49.876781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:49.913440   72639 cri.go:89] found id: ""
	I1014 15:03:49.913464   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.913473   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:49.913479   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:49.913535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:49.949352   72639 cri.go:89] found id: ""
	I1014 15:03:49.949383   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.949395   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:49.949402   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:49.949463   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:49.984599   72639 cri.go:89] found id: ""
	I1014 15:03:49.984629   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.984641   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:49.984649   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:49.984709   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:50.028049   72639 cri.go:89] found id: ""
	I1014 15:03:50.028072   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.028083   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:50.028090   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:50.028166   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:50.062272   72639 cri.go:89] found id: ""
	I1014 15:03:50.062294   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.062302   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:50.062308   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:50.062358   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:50.099722   72639 cri.go:89] found id: ""
	I1014 15:03:50.099750   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.099762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:50.099769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:50.099830   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:50.139984   72639 cri.go:89] found id: ""
	I1014 15:03:50.140005   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.140013   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:50.140020   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:50.140032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:50.218467   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:50.218500   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:50.260600   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:50.260635   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:50.313725   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:50.313757   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:50.328431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:50.328462   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:50.401334   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:52.901787   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:52.917836   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:52.917902   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:52.955387   72639 cri.go:89] found id: ""
	I1014 15:03:52.955418   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.955431   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:52.955440   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:52.955504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:52.990890   72639 cri.go:89] found id: ""
	I1014 15:03:52.990924   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.990936   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:52.990945   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:52.991004   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:53.032344   72639 cri.go:89] found id: ""
	I1014 15:03:53.032374   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.032384   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:53.032390   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:53.032458   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:53.073501   72639 cri.go:89] found id: ""
	I1014 15:03:53.073527   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.073537   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:53.073544   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:53.073602   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:53.114273   72639 cri.go:89] found id: ""
	I1014 15:03:53.114307   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.114316   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:53.114334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:53.114389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:53.155448   72639 cri.go:89] found id: ""
	I1014 15:03:53.155475   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.155484   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:53.155490   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:53.155539   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:53.191304   72639 cri.go:89] found id: ""
	I1014 15:03:53.191338   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.191350   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:53.191357   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:53.191438   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:53.224664   72639 cri.go:89] found id: ""
	I1014 15:03:53.224691   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.224702   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:53.224727   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:53.224744   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:53.275751   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:53.275786   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:53.289275   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:53.289303   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:53.369828   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:53.369855   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:53.369871   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:53.457248   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:53.457285   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:56.003384   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:56.017722   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:56.017782   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:56.056644   72639 cri.go:89] found id: ""
	I1014 15:03:56.056675   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.056686   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:56.056694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:56.056757   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:56.094482   72639 cri.go:89] found id: ""
	I1014 15:03:56.094507   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.094517   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:56.094524   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:56.094583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:56.129884   72639 cri.go:89] found id: ""
	I1014 15:03:56.129913   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.129921   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:56.129926   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:56.129974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:56.167171   72639 cri.go:89] found id: ""
	I1014 15:03:56.167198   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.167206   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:56.167211   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:56.167264   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:56.204400   72639 cri.go:89] found id: ""
	I1014 15:03:56.204433   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.204442   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:56.204447   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:56.204494   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:56.240407   72639 cri.go:89] found id: ""
	I1014 15:03:56.240437   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.240448   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:56.240456   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:56.240517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:56.277653   72639 cri.go:89] found id: ""
	I1014 15:03:56.277679   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.277687   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:56.277693   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:56.277738   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:56.313423   72639 cri.go:89] found id: ""
	I1014 15:03:56.313451   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.313459   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:56.313468   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:56.313480   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:56.368094   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:56.368133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:56.382563   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:56.382621   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:56.455106   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:56.455130   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:56.455144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:56.532288   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:56.532329   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.072469   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:59.089024   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:59.089094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:59.130798   72639 cri.go:89] found id: ""
	I1014 15:03:59.130829   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.130840   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:59.130848   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:59.130908   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:59.167828   72639 cri.go:89] found id: ""
	I1014 15:03:59.167854   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.167864   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:59.167871   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:59.167932   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:59.223482   72639 cri.go:89] found id: ""
	I1014 15:03:59.223509   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.223520   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:59.223528   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:59.223590   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:59.261186   72639 cri.go:89] found id: ""
	I1014 15:03:59.261231   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.261243   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:59.261251   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:59.261314   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:59.296924   72639 cri.go:89] found id: ""
	I1014 15:03:59.296985   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.297000   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:59.297008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:59.297084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:59.333891   72639 cri.go:89] found id: ""
	I1014 15:03:59.333915   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.333923   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:59.333929   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:59.333991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:59.374106   72639 cri.go:89] found id: ""
	I1014 15:03:59.374134   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.374143   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:59.374150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:59.374222   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:59.412256   72639 cri.go:89] found id: ""
	I1014 15:03:59.412283   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.412291   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:59.412298   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:59.412308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:59.492869   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:59.492904   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:59.492923   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:59.576441   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:59.576473   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.618638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:59.618668   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:59.671295   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:59.671331   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.184689   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:02.197763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:02.197833   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:02.231709   72639 cri.go:89] found id: ""
	I1014 15:04:02.231734   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.231746   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:02.231753   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:02.231815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:02.269259   72639 cri.go:89] found id: ""
	I1014 15:04:02.269291   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.269303   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:02.269311   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:02.269390   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:02.305926   72639 cri.go:89] found id: ""
	I1014 15:04:02.305956   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.305967   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:02.305975   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:02.306034   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:02.349516   72639 cri.go:89] found id: ""
	I1014 15:04:02.349544   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.349557   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:02.349563   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:02.349622   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:02.388334   72639 cri.go:89] found id: ""
	I1014 15:04:02.388361   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.388371   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:02.388376   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:02.388428   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:02.422742   72639 cri.go:89] found id: ""
	I1014 15:04:02.422770   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.422781   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:02.422789   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:02.422850   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:02.463686   72639 cri.go:89] found id: ""
	I1014 15:04:02.463710   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.463718   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:02.463724   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:02.463770   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:02.498352   72639 cri.go:89] found id: ""
	I1014 15:04:02.498383   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.498394   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:02.498404   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:02.498418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.512531   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:02.512561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:02.585331   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:02.585359   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:02.585373   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:02.667376   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:02.667414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:02.708101   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:02.708133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.259839   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:05.273102   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:05.273186   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:05.311745   72639 cri.go:89] found id: ""
	I1014 15:04:05.311768   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.311776   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:05.311787   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:05.311834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:05.349313   72639 cri.go:89] found id: ""
	I1014 15:04:05.349336   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.349344   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:05.349352   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:05.349416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:05.388003   72639 cri.go:89] found id: ""
	I1014 15:04:05.388026   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.388034   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:05.388039   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:05.388098   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:05.426636   72639 cri.go:89] found id: ""
	I1014 15:04:05.426665   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.426676   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:05.426683   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:05.426745   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:05.461945   72639 cri.go:89] found id: ""
	I1014 15:04:05.461974   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.461983   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:05.461989   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:05.462049   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:05.497099   72639 cri.go:89] found id: ""
	I1014 15:04:05.497130   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.497142   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:05.497149   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:05.497216   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:05.531621   72639 cri.go:89] found id: ""
	I1014 15:04:05.531652   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.531664   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:05.531671   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:05.531729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:05.568950   72639 cri.go:89] found id: ""
	I1014 15:04:05.568973   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.568983   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:05.568992   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:05.569012   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.624806   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:05.624846   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:05.651912   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:05.651961   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:05.740342   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:05.740369   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:05.740384   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:05.817901   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:05.817932   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:08.360267   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:08.373249   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:08.373325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:08.409485   72639 cri.go:89] found id: ""
	I1014 15:04:08.409520   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.409535   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:08.409542   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:08.409604   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:08.444977   72639 cri.go:89] found id: ""
	I1014 15:04:08.445000   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.445008   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:08.445014   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:08.445061   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:08.478080   72639 cri.go:89] found id: ""
	I1014 15:04:08.478108   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.478117   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:08.478123   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:08.478169   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:08.511510   72639 cri.go:89] found id: ""
	I1014 15:04:08.511536   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.511545   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:08.511552   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:08.511603   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:08.546260   72639 cri.go:89] found id: ""
	I1014 15:04:08.546285   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.546292   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:08.546299   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:08.546347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:08.582775   72639 cri.go:89] found id: ""
	I1014 15:04:08.582799   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.582810   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:08.582816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:08.582875   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:08.619208   72639 cri.go:89] found id: ""
	I1014 15:04:08.619231   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.619239   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:08.619244   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:08.619299   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:08.654823   72639 cri.go:89] found id: ""
	I1014 15:04:08.654849   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.654860   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:08.654870   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:08.654885   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:08.704543   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:08.704574   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:08.718111   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:08.718144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:08.792267   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:08.792290   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:08.792309   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:08.870178   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:08.870210   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
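
Each cycle above shells out to `crictl ps -a --quiet --name=<component>` for every control-plane component and treats empty output as "no container found" before falling back to log gathering. Below is a minimal Go sketch of that probe, assuming `sudo` and `crictl` are reachable on the node; the function and variable names are illustrative, not minikube's actual internals.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probe seen in the log: it asks crictl for all
// containers (any state) whose name matches the given component and returns
// the container IDs, one per line of crictl's --quiet output.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Printf("probe failed for %q: %v\n", component, err)
			continue
		}
		if len(ids) == 0 {
			// Corresponds to the log's `No container was found matching "<name>"` warnings.
			fmt.Printf("no container was found matching %q\n", component)
		} else {
			fmt.Printf("%s: %v\n", component, ids)
		}
	}
}
```
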
	I1014 15:04:11.409975   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:11.432171   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:11.432243   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:11.468997   72639 cri.go:89] found id: ""
	I1014 15:04:11.469021   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.469030   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:11.469035   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:11.469094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:11.504312   72639 cri.go:89] found id: ""
	I1014 15:04:11.504337   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.504346   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:11.504354   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:11.504417   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:11.540628   72639 cri.go:89] found id: ""
	I1014 15:04:11.540654   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.540662   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:11.540667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:11.540729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:11.576466   72639 cri.go:89] found id: ""
	I1014 15:04:11.576491   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.576498   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:11.576506   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:11.576550   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:11.611466   72639 cri.go:89] found id: ""
	I1014 15:04:11.611501   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.611512   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:11.611519   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:11.611578   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:11.650089   72639 cri.go:89] found id: ""
	I1014 15:04:11.650116   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.650126   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:11.650133   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:11.650191   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:11.686538   72639 cri.go:89] found id: ""
	I1014 15:04:11.686563   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.686571   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:11.686577   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:11.686654   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:11.725494   72639 cri.go:89] found id: ""
	I1014 15:04:11.725517   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.725524   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:11.725532   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:11.725545   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:11.779062   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:11.779102   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:11.792726   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:11.792753   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:11.867945   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:11.867972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:11.867986   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:11.952299   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:11.952340   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:14.493922   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:14.506754   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:14.506817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:14.540456   72639 cri.go:89] found id: ""
	I1014 15:04:14.540480   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.540489   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:14.540495   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:14.540545   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:14.574819   72639 cri.go:89] found id: ""
	I1014 15:04:14.574843   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.574853   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:14.574859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:14.574917   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:14.608834   72639 cri.go:89] found id: ""
	I1014 15:04:14.608859   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.608868   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:14.608873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:14.608920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:14.644182   72639 cri.go:89] found id: ""
	I1014 15:04:14.644210   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.644218   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:14.644223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:14.644283   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:14.679113   72639 cri.go:89] found id: ""
	I1014 15:04:14.679145   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.679156   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:14.679164   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:14.679228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:14.716111   72639 cri.go:89] found id: ""
	I1014 15:04:14.716142   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.716154   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:14.716167   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:14.716220   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:14.755884   72639 cri.go:89] found id: ""
	I1014 15:04:14.755907   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.755915   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:14.755920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:14.755968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:14.794167   72639 cri.go:89] found id: ""
	I1014 15:04:14.794195   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.794207   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:14.794217   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:14.794234   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:14.844828   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:14.844864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:14.859424   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:14.859451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:14.936660   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:14.936687   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:14.936703   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:15.017034   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:15.017070   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:17.555604   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:17.570628   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:17.570687   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:17.612919   72639 cri.go:89] found id: ""
	I1014 15:04:17.612943   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.612951   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:17.612956   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:17.613002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:17.651178   72639 cri.go:89] found id: ""
	I1014 15:04:17.651210   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.651220   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:17.651226   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:17.651278   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:17.687923   72639 cri.go:89] found id: ""
	I1014 15:04:17.687955   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.687966   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:17.687973   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:17.688024   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:17.724759   72639 cri.go:89] found id: ""
	I1014 15:04:17.724790   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.724800   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:17.724807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:17.724866   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:17.760189   72639 cri.go:89] found id: ""
	I1014 15:04:17.760212   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.760220   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:17.760226   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:17.760274   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:17.797517   72639 cri.go:89] found id: ""
	I1014 15:04:17.797541   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.797549   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:17.797554   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:17.797601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:17.833238   72639 cri.go:89] found id: ""
	I1014 15:04:17.833261   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.833270   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:17.833275   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:17.833321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:17.868828   72639 cri.go:89] found id: ""
	I1014 15:04:17.868857   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.868865   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:17.868873   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:17.868883   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:17.956972   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:17.957011   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:18.006354   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:18.006390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:18.056237   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:18.056271   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:18.070763   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:18.070792   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:18.147471   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:20.648238   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:20.661465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:20.661534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:20.695869   72639 cri.go:89] found id: ""
	I1014 15:04:20.695894   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.695902   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:20.695907   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:20.695957   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:20.729271   72639 cri.go:89] found id: ""
	I1014 15:04:20.729295   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.729313   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:20.729319   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:20.729364   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:20.767110   72639 cri.go:89] found id: ""
	I1014 15:04:20.767137   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.767147   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:20.767154   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:20.767209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:20.802752   72639 cri.go:89] found id: ""
	I1014 15:04:20.802781   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.802791   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:20.802798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:20.802846   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:20.841958   72639 cri.go:89] found id: ""
	I1014 15:04:20.841987   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.841998   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:20.842005   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:20.842066   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:20.878869   72639 cri.go:89] found id: ""
	I1014 15:04:20.878896   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.878907   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:20.878914   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:20.878974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:20.913802   72639 cri.go:89] found id: ""
	I1014 15:04:20.913838   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.913852   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:20.913861   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:20.913922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:20.948350   72639 cri.go:89] found id: ""
	I1014 15:04:20.948378   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.948395   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:20.948403   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:20.948416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:21.001065   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:21.001098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:21.014427   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:21.014458   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:21.091386   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:21.091412   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:21.091432   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:21.175255   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:21.175299   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:23.718260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:23.732366   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:23.732445   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:23.767269   72639 cri.go:89] found id: ""
	I1014 15:04:23.767299   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.767311   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:23.767317   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:23.767379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:23.808502   72639 cri.go:89] found id: ""
	I1014 15:04:23.808532   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.808543   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:23.808550   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:23.808606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:23.845632   72639 cri.go:89] found id: ""
	I1014 15:04:23.845664   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.845677   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:23.845685   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:23.845753   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:23.880218   72639 cri.go:89] found id: ""
	I1014 15:04:23.880249   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.880261   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:23.880268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:23.880332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:23.915674   72639 cri.go:89] found id: ""
	I1014 15:04:23.915697   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.915705   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:23.915710   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:23.915767   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:23.950526   72639 cri.go:89] found id: ""
	I1014 15:04:23.950559   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.950570   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:23.950578   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:23.950656   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:23.986130   72639 cri.go:89] found id: ""
	I1014 15:04:23.986167   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.986178   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:23.986186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:23.986246   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:24.027112   72639 cri.go:89] found id: ""
	I1014 15:04:24.027141   72639 logs.go:282] 0 containers: []
	W1014 15:04:24.027154   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:24.027165   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:24.027181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:24.082559   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:24.082610   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:24.096900   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:24.096929   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:24.173293   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:24.173327   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:24.173341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:24.256921   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:24.256962   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:26.802073   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:26.817307   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:26.817366   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:26.855777   72639 cri.go:89] found id: ""
	I1014 15:04:26.855805   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.855817   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:26.855825   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:26.855876   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:26.892260   72639 cri.go:89] found id: ""
	I1014 15:04:26.892288   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.892300   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:26.892308   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:26.892369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:26.931066   72639 cri.go:89] found id: ""
	I1014 15:04:26.931103   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.931114   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:26.931122   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:26.931174   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:26.966890   72639 cri.go:89] found id: ""
	I1014 15:04:26.966923   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.966933   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:26.966941   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:26.967002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:27.001338   72639 cri.go:89] found id: ""
	I1014 15:04:27.001368   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.001379   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:27.001386   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:27.001454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:27.041798   72639 cri.go:89] found id: ""
	I1014 15:04:27.041830   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.041839   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:27.041844   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:27.041905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:27.080248   72639 cri.go:89] found id: ""
	I1014 15:04:27.080279   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.080288   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:27.080293   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:27.080341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:27.116207   72639 cri.go:89] found id: ""
	I1014 15:04:27.116234   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.116242   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:27.116250   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:27.116264   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:27.191149   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:27.191174   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:27.191203   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:27.275771   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:27.275808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:27.323223   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:27.323254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:27.375409   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:27.375455   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:29.890408   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:29.904797   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:29.904853   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:29.938655   72639 cri.go:89] found id: ""
	I1014 15:04:29.938685   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.938698   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:29.938705   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:29.938765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:29.976477   72639 cri.go:89] found id: ""
	I1014 15:04:29.976508   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.976519   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:29.976526   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:29.976583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:30.014813   72639 cri.go:89] found id: ""
	I1014 15:04:30.014842   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.014853   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:30.014860   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:30.014926   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:30.050804   72639 cri.go:89] found id: ""
	I1014 15:04:30.050833   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.050844   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:30.050854   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:30.050918   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:30.087921   72639 cri.go:89] found id: ""
	I1014 15:04:30.087946   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.087954   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:30.087959   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:30.088016   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:30.125411   72639 cri.go:89] found id: ""
	I1014 15:04:30.125446   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.125458   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:30.125465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:30.125519   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:30.162067   72639 cri.go:89] found id: ""
	I1014 15:04:30.162099   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.162110   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:30.162118   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:30.162181   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:30.200376   72639 cri.go:89] found id: ""
	I1014 15:04:30.200406   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.200418   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:30.200435   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:30.200451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:30.279965   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:30.279992   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:30.280007   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:30.364866   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:30.364900   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:30.408808   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:30.408842   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:30.464473   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:30.464507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:32.980254   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:32.994254   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:32.994320   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:33.035996   72639 cri.go:89] found id: ""
	I1014 15:04:33.036025   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.036036   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:33.036043   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:33.036103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:33.077494   72639 cri.go:89] found id: ""
	I1014 15:04:33.077522   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.077531   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:33.077538   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:33.077585   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:33.112666   72639 cri.go:89] found id: ""
	I1014 15:04:33.112695   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.112705   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:33.112711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:33.112772   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:33.150229   72639 cri.go:89] found id: ""
	I1014 15:04:33.150266   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.150276   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:33.150282   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:33.150336   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:33.186960   72639 cri.go:89] found id: ""
	I1014 15:04:33.186989   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.187001   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:33.187008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:33.187062   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:33.223596   72639 cri.go:89] found id: ""
	I1014 15:04:33.223631   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.223641   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:33.223647   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:33.223711   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:33.260137   72639 cri.go:89] found id: ""
	I1014 15:04:33.260162   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.260170   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:33.260175   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:33.260228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:33.298072   72639 cri.go:89] found id: ""
	I1014 15:04:33.298095   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.298103   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:33.298110   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:33.298121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:33.379587   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:33.379623   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:33.423427   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:33.423456   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:33.474644   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:33.474683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:33.488324   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:33.488354   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:33.556257   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
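
Every "failed describe nodes" block above is the same symptom: with no kube-apiserver container running, the bundled kubectl cannot reach localhost:8443 and exits with status 1. The sketch below shows how that failure surfaces when the command is run through a shell from Go; the command string is copied from the log, while the wrapper and its error handling are illustrative only.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// The same command the log shows failing while the apiserver is down.
	cmd := exec.Command("/bin/bash", "-c",
		"sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")

	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	if err := cmd.Run(); err != nil {
		// With nothing listening on localhost:8443 this is the expected outcome:
		// a non-zero exit status plus a "connection refused" message on stderr,
		// exactly as recorded in the blocks above.
		fmt.Printf("describe nodes failed: %v\nstdout: %s\nstderr: %s\n",
			err, stdout.String(), stderr.String())
		return
	}
	fmt.Println(stdout.String())
}
```
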
	I1014 15:04:36.056955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:36.072461   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:36.072536   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:36.109467   72639 cri.go:89] found id: ""
	I1014 15:04:36.109493   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.109502   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:36.109509   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:36.109561   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:36.147985   72639 cri.go:89] found id: ""
	I1014 15:04:36.148012   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.148020   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:36.148025   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:36.148071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:36.183885   72639 cri.go:89] found id: ""
	I1014 15:04:36.183906   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.183914   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:36.183919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:36.183968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:36.220994   72639 cri.go:89] found id: ""
	I1014 15:04:36.221025   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.221036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:36.221044   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:36.221108   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:36.256586   72639 cri.go:89] found id: ""
	I1014 15:04:36.256612   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.256621   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:36.256627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:36.256683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:36.293229   72639 cri.go:89] found id: ""
	I1014 15:04:36.293256   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.293265   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:36.293272   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:36.293339   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:36.329254   72639 cri.go:89] found id: ""
	I1014 15:04:36.329279   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.329290   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:36.329297   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:36.329357   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:36.366495   72639 cri.go:89] found id: ""
	I1014 15:04:36.366526   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.366538   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:36.366548   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:36.366561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:36.420985   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:36.421018   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:36.435532   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:36.435565   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:36.510459   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.510484   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:36.510499   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:36.593057   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:36.593094   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
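
The gathering step itself draws from a fixed set of sources on every cycle: the kubelet journal, dmesg filtered to warnings and above, "describe nodes", the CRI-O journal, and container status (the order varies between cycles). A small Go sketch that runs the same commands in sequence is shown here; the command strings are taken from the log, the wrapper around them is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The log sources gathered on each cycle, in one common order.
	sources := []struct {
		name string
		cmd  string
	}{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
		{"describe nodes", "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
		{"CRI-O", "sudo journalctl -u crio -n 400"},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}

	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			// "describe nodes" is expected to fail while the apiserver is down,
			// matching the repeated failures recorded above.
			fmt.Printf("gathering %s failed: %v\n", s.name, err)
		}
		fmt.Printf("==> %s: %d bytes\n", s.name, len(out))
	}
}
```
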
	I1014 15:04:39.138570   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:39.152280   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:39.152342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:39.186647   72639 cri.go:89] found id: ""
	I1014 15:04:39.186676   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.186687   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:39.186694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:39.186754   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:39.223560   72639 cri.go:89] found id: ""
	I1014 15:04:39.223586   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.223594   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:39.223599   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:39.223644   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:39.257835   72639 cri.go:89] found id: ""
	I1014 15:04:39.257867   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.257879   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:39.257886   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:39.257947   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:39.294656   72639 cri.go:89] found id: ""
	I1014 15:04:39.294684   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.294692   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:39.294699   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:39.294750   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:39.333474   72639 cri.go:89] found id: ""
	I1014 15:04:39.333503   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.333513   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:39.333520   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:39.333586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:39.374385   72639 cri.go:89] found id: ""
	I1014 15:04:39.374414   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.374424   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:39.374435   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:39.374483   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:39.412856   72639 cri.go:89] found id: ""
	I1014 15:04:39.412888   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.412899   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:39.412906   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:39.412966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:39.463087   72639 cri.go:89] found id: ""
	I1014 15:04:39.463115   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.463127   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:39.463138   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:39.463154   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:39.514309   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:39.514342   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:39.528947   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:39.528972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:39.603984   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:39.604004   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:39.604016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.685053   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:39.685093   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.234178   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:42.247421   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:42.247497   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:42.288496   72639 cri.go:89] found id: ""
	I1014 15:04:42.288521   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.288529   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:42.288535   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:42.288588   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:42.324346   72639 cri.go:89] found id: ""
	I1014 15:04:42.324382   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.324394   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:42.324401   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:42.324469   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:42.362879   72639 cri.go:89] found id: ""
	I1014 15:04:42.362910   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.362922   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:42.362928   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:42.362991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:42.399347   72639 cri.go:89] found id: ""
	I1014 15:04:42.399375   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.399383   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:42.399389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:42.399473   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:42.434942   72639 cri.go:89] found id: ""
	I1014 15:04:42.434971   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.434990   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:42.434999   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:42.435063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:42.470886   72639 cri.go:89] found id: ""
	I1014 15:04:42.470916   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.470928   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:42.470934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:42.470994   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:42.510713   72639 cri.go:89] found id: ""
	I1014 15:04:42.510742   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.510752   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:42.510758   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:42.510820   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:42.544506   72639 cri.go:89] found id: ""
	I1014 15:04:42.544538   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.544547   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:42.544559   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:42.544570   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.588658   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:42.588694   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:42.642165   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:42.642198   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:42.658073   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:42.658110   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:42.730486   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:42.730510   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:42.730524   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.307806   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:45.321664   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:45.321733   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:45.359670   72639 cri.go:89] found id: ""
	I1014 15:04:45.359697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.359708   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:45.359715   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:45.359781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:45.398673   72639 cri.go:89] found id: ""
	I1014 15:04:45.398703   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.398715   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:45.398722   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:45.398784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:45.441656   72639 cri.go:89] found id: ""
	I1014 15:04:45.441685   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.441697   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:45.441705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:45.441768   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:45.476159   72639 cri.go:89] found id: ""
	I1014 15:04:45.476188   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.476195   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:45.476201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:45.476263   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:45.513776   72639 cri.go:89] found id: ""
	I1014 15:04:45.513807   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.513819   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:45.513828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:45.513894   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:45.550336   72639 cri.go:89] found id: ""
	I1014 15:04:45.550371   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.550382   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:45.550388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:45.550450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:45.586668   72639 cri.go:89] found id: ""
	I1014 15:04:45.586697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.586705   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:45.586711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:45.586760   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:45.622530   72639 cri.go:89] found id: ""
	I1014 15:04:45.622559   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.622568   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:45.622576   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:45.622589   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:45.674471   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:45.674504   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:45.690430   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:45.690463   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:45.772133   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:45.772165   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:45.772181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.859835   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:45.859880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.434011   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:48.448747   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:48.448826   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:48.493642   72639 cri.go:89] found id: ""
	I1014 15:04:48.493668   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.493680   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:48.493687   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:48.493747   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:48.530298   72639 cri.go:89] found id: ""
	I1014 15:04:48.530327   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.530336   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:48.530344   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:48.530403   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:48.566215   72639 cri.go:89] found id: ""
	I1014 15:04:48.566242   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.566252   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:48.566261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:48.566325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:48.604528   72639 cri.go:89] found id: ""
	I1014 15:04:48.604553   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.604561   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:48.604566   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:48.604616   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:48.646152   72639 cri.go:89] found id: ""
	I1014 15:04:48.646180   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.646191   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:48.646198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:48.646257   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:48.682670   72639 cri.go:89] found id: ""
	I1014 15:04:48.682696   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.682704   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:48.682711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:48.682762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:48.722292   72639 cri.go:89] found id: ""
	I1014 15:04:48.722318   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.722326   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:48.722335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:48.722400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:48.762474   72639 cri.go:89] found id: ""
	I1014 15:04:48.762506   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.762518   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:48.762528   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:48.762553   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:48.776628   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:48.776652   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:48.849904   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:48.849928   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:48.849941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:48.927033   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:48.927068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.970775   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:48.970807   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:51.521113   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:51.535318   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:51.535389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:51.582631   72639 cri.go:89] found id: ""
	I1014 15:04:51.582658   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.582666   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:51.582671   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:51.582721   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:51.655323   72639 cri.go:89] found id: ""
	I1014 15:04:51.655362   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.655371   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:51.655376   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:51.655433   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:51.722837   72639 cri.go:89] found id: ""
	I1014 15:04:51.722863   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.722875   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:51.722882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:51.722939   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:51.759917   72639 cri.go:89] found id: ""
	I1014 15:04:51.759946   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.759957   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:51.759963   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:51.760023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:51.798656   72639 cri.go:89] found id: ""
	I1014 15:04:51.798689   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.798702   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:51.798711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:51.798777   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:51.839285   72639 cri.go:89] found id: ""
	I1014 15:04:51.839312   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.839324   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:51.839334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:51.839391   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:51.876997   72639 cri.go:89] found id: ""
	I1014 15:04:51.877028   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.877038   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:51.877045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:51.877091   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:51.913991   72639 cri.go:89] found id: ""
	I1014 15:04:51.914020   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.914028   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:51.914036   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:51.914046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:51.993392   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:51.993427   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:52.039722   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:52.039756   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:52.090901   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:52.090937   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:52.105014   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:52.105052   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:52.175505   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:54.676549   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:54.690113   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:54.690204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:54.726478   72639 cri.go:89] found id: ""
	I1014 15:04:54.726511   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.726523   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:54.726538   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:54.726611   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:54.764990   72639 cri.go:89] found id: ""
	I1014 15:04:54.765017   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.765025   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:54.765031   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:54.765095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:54.804779   72639 cri.go:89] found id: ""
	I1014 15:04:54.804808   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.804819   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:54.804828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:54.804886   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:54.848657   72639 cri.go:89] found id: ""
	I1014 15:04:54.848682   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.848698   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:54.848705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:54.848765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:54.886806   72639 cri.go:89] found id: ""
	I1014 15:04:54.886834   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.886845   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:54.886853   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:54.886912   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:54.923297   72639 cri.go:89] found id: ""
	I1014 15:04:54.923323   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.923330   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:54.923335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:54.923380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:54.966297   72639 cri.go:89] found id: ""
	I1014 15:04:54.966321   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.966329   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:54.966334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:54.966382   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:55.012047   72639 cri.go:89] found id: ""
	I1014 15:04:55.012071   72639 logs.go:282] 0 containers: []
	W1014 15:04:55.012079   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:55.012087   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:55.012097   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:55.066031   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:55.066063   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:55.080954   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:55.080981   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:55.159644   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:55.159670   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:55.159683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:55.243303   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:55.243341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:57.784555   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:57.799051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:57.799132   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:57.841084   72639 cri.go:89] found id: ""
	I1014 15:04:57.841108   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.841115   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:57.841121   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:57.841167   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:57.881510   72639 cri.go:89] found id: ""
	I1014 15:04:57.881542   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.881555   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:57.881562   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:57.881624   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:57.916893   72639 cri.go:89] found id: ""
	I1014 15:04:57.916923   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.916934   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:57.916940   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:57.916988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:57.956991   72639 cri.go:89] found id: ""
	I1014 15:04:57.957023   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.957036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:57.957046   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:57.957118   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:57.993765   72639 cri.go:89] found id: ""
	I1014 15:04:57.993792   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.993803   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:57.993809   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:57.993869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:58.032044   72639 cri.go:89] found id: ""
	I1014 15:04:58.032085   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.032098   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:58.032107   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:58.032173   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:58.069733   72639 cri.go:89] found id: ""
	I1014 15:04:58.069754   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.069762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:58.069767   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:58.069813   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:58.105851   72639 cri.go:89] found id: ""
	I1014 15:04:58.105880   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.105891   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:58.105901   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:58.105914   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:58.159922   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:58.159956   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:58.173779   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:58.173802   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:58.253551   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:58.253576   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:58.253591   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:58.342607   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:58.342647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:00.884705   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:00.900147   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:00.900215   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:00.940372   72639 cri.go:89] found id: ""
	I1014 15:05:00.940402   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.940413   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:00.940420   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:00.940489   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:00.981400   72639 cri.go:89] found id: ""
	I1014 15:05:00.981431   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.981441   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:00.981447   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:00.981517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:01.021981   72639 cri.go:89] found id: ""
	I1014 15:05:01.022002   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.022011   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:01.022016   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:01.022067   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:01.056976   72639 cri.go:89] found id: ""
	I1014 15:05:01.057005   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.057013   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:01.057020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:01.057063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:01.092702   72639 cri.go:89] found id: ""
	I1014 15:05:01.092732   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.092739   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:01.092745   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:01.092803   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:01.128861   72639 cri.go:89] found id: ""
	I1014 15:05:01.128892   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.128902   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:01.128908   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:01.128958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:01.162672   72639 cri.go:89] found id: ""
	I1014 15:05:01.162702   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.162712   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:01.162719   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:01.162791   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:01.202724   72639 cri.go:89] found id: ""
	I1014 15:05:01.202751   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.202761   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:01.202770   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:01.202785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:01.280702   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:01.280723   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:01.280735   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:01.362909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:01.362943   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:01.406737   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:01.406766   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:01.460090   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:01.460125   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:03.975661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:03.989811   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:03.989874   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:04.028396   72639 cri.go:89] found id: ""
	I1014 15:05:04.028426   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.028438   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:04.028445   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:04.028499   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:04.065871   72639 cri.go:89] found id: ""
	I1014 15:05:04.065901   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.065912   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:04.065919   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:04.065980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:04.103155   72639 cri.go:89] found id: ""
	I1014 15:05:04.103184   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.103192   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:04.103198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:04.103248   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:04.139503   72639 cri.go:89] found id: ""
	I1014 15:05:04.139531   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.139539   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:04.139545   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:04.139601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:04.171638   72639 cri.go:89] found id: ""
	I1014 15:05:04.171663   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.171671   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:04.171676   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:04.171734   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:04.213720   72639 cri.go:89] found id: ""
	I1014 15:05:04.213751   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.213760   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:04.213766   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:04.213815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:04.248088   72639 cri.go:89] found id: ""
	I1014 15:05:04.248109   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.248117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:04.248121   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:04.248183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:04.286454   72639 cri.go:89] found id: ""
	I1014 15:05:04.286479   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.286487   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:04.286495   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:04.286506   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:04.339564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:04.339599   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:04.353034   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:04.353061   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:04.432764   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:04.432786   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:04.432797   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:04.514561   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:04.514613   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.057507   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:07.072798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:07.072873   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:07.113672   72639 cri.go:89] found id: ""
	I1014 15:05:07.113694   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.113701   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:07.113706   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:07.113761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:07.149321   72639 cri.go:89] found id: ""
	I1014 15:05:07.149348   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.149357   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:07.149362   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:07.149416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:07.185717   72639 cri.go:89] found id: ""
	I1014 15:05:07.185748   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.185760   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:07.185768   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:07.185822   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:07.225747   72639 cri.go:89] found id: ""
	I1014 15:05:07.225772   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.225783   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:07.225791   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:07.225843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:07.265834   72639 cri.go:89] found id: ""
	I1014 15:05:07.265864   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.265875   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:07.265882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:07.265944   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:07.300595   72639 cri.go:89] found id: ""
	I1014 15:05:07.300622   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.300631   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:07.300637   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:07.300686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:07.343249   72639 cri.go:89] found id: ""
	I1014 15:05:07.343280   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.343291   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:07.343298   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:07.343365   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:07.379525   72639 cri.go:89] found id: ""
	I1014 15:05:07.379549   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.379557   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:07.379564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:07.379576   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:07.393622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:07.393653   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:07.473973   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:07.473998   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:07.474013   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:07.556937   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:07.556971   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.602224   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:07.602249   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.156920   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:10.170971   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:10.171037   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:10.206568   72639 cri.go:89] found id: ""
	I1014 15:05:10.206610   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.206623   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:10.206630   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:10.206689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:10.249075   72639 cri.go:89] found id: ""
	I1014 15:05:10.249101   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.249110   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:10.249121   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:10.249175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:10.285620   72639 cri.go:89] found id: ""
	I1014 15:05:10.285649   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.285660   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:10.285667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:10.285730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:10.322291   72639 cri.go:89] found id: ""
	I1014 15:05:10.322314   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.322322   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:10.322327   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:10.322379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:10.356691   72639 cri.go:89] found id: ""
	I1014 15:05:10.356720   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.356730   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:10.356738   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:10.356802   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:10.401192   72639 cri.go:89] found id: ""
	I1014 15:05:10.401223   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.401234   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:10.401242   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:10.401303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:10.438198   72639 cri.go:89] found id: ""
	I1014 15:05:10.438225   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.438236   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:10.438243   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:10.438380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:10.474142   72639 cri.go:89] found id: ""
	I1014 15:05:10.474166   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.474174   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:10.474181   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:10.474193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:10.546549   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:10.546569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:10.546582   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:10.624235   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:10.624268   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:10.664896   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:10.664926   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.719425   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:10.719464   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.234162   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:13.247614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:13.247689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:13.285040   72639 cri.go:89] found id: ""
	I1014 15:05:13.285068   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.285078   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:13.285086   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:13.285154   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:13.334084   72639 cri.go:89] found id: ""
	I1014 15:05:13.334125   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.334133   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:13.334139   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:13.334204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:13.369164   72639 cri.go:89] found id: ""
	I1014 15:05:13.369199   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.369211   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:13.369223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:13.369285   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:13.405202   72639 cri.go:89] found id: ""
	I1014 15:05:13.405232   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.405244   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:13.405252   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:13.405304   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:13.443271   72639 cri.go:89] found id: ""
	I1014 15:05:13.443302   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.443311   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:13.443317   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:13.443369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:13.483541   72639 cri.go:89] found id: ""
	I1014 15:05:13.483570   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.483580   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:13.483588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:13.483650   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:13.518580   72639 cri.go:89] found id: ""
	I1014 15:05:13.518622   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.518633   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:13.518641   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:13.518701   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:13.553638   72639 cri.go:89] found id: ""
	I1014 15:05:13.553668   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.553678   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:13.553688   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:13.553702   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:13.605379   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:13.605413   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.620525   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:13.620556   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:13.699628   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:13.699658   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:13.699672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:13.778006   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:13.778046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.316703   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:16.331511   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:16.331577   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:16.367045   72639 cri.go:89] found id: ""
	I1014 15:05:16.367075   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.367083   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:16.367089   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:16.367144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:16.403240   72639 cri.go:89] found id: ""
	I1014 15:05:16.403264   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.403274   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:16.403285   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:16.403344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:16.438570   72639 cri.go:89] found id: ""
	I1014 15:05:16.438612   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.438625   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:16.438632   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:16.438694   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:16.477153   72639 cri.go:89] found id: ""
	I1014 15:05:16.477174   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.477182   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:16.477187   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:16.477232   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:16.516308   72639 cri.go:89] found id: ""
	I1014 15:05:16.516336   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.516348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:16.516355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:16.516421   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:16.551337   72639 cri.go:89] found id: ""
	I1014 15:05:16.551365   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.551375   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:16.551383   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:16.551450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:16.587073   72639 cri.go:89] found id: ""
	I1014 15:05:16.587105   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.587117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:16.587125   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:16.587183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:16.623940   72639 cri.go:89] found id: ""
	I1014 15:05:16.623962   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.623970   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:16.623978   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:16.623989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.671593   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:16.671618   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:16.723057   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:16.723092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:16.737623   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:16.737656   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:16.809539   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:16.809569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:16.809592   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:19.390406   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:19.404850   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:19.404928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:19.446931   72639 cri.go:89] found id: ""
	I1014 15:05:19.446962   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.446973   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:19.446980   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:19.447043   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:19.488112   72639 cri.go:89] found id: ""
	I1014 15:05:19.488136   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.488144   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:19.488150   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:19.488208   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:19.523333   72639 cri.go:89] found id: ""
	I1014 15:05:19.523365   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.523382   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:19.523389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:19.523447   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:19.557887   72639 cri.go:89] found id: ""
	I1014 15:05:19.557910   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.557918   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:19.557927   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:19.557972   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:19.593792   72639 cri.go:89] found id: ""
	I1014 15:05:19.593815   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.593822   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:19.593873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:19.593922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:19.628291   72639 cri.go:89] found id: ""
	I1014 15:05:19.628324   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.628335   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:19.628343   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:19.628405   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:19.664088   72639 cri.go:89] found id: ""
	I1014 15:05:19.664118   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.664130   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:19.664138   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:19.664211   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:19.700825   72639 cri.go:89] found id: ""
	I1014 15:05:19.700853   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.700863   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:19.700873   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:19.700886   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:19.741631   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:19.741666   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:19.792667   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:19.792706   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:19.806928   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:19.806965   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:19.880030   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:19.880059   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:19.880073   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.465251   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:22.479031   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:22.479096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:22.519123   72639 cri.go:89] found id: ""
	I1014 15:05:22.519147   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.519158   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:22.519171   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:22.519235   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:22.552250   72639 cri.go:89] found id: ""
	I1014 15:05:22.552277   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.552287   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:22.552294   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:22.552354   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:22.594213   72639 cri.go:89] found id: ""
	I1014 15:05:22.594243   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.594253   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:22.594261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:22.594310   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:22.630081   72639 cri.go:89] found id: ""
	I1014 15:05:22.630110   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.630121   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:22.630129   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:22.630195   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:22.665454   72639 cri.go:89] found id: ""
	I1014 15:05:22.665485   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.665497   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:22.665505   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:22.665568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:22.710697   72639 cri.go:89] found id: ""
	I1014 15:05:22.710725   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.710734   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:22.710742   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:22.710798   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:22.748486   72639 cri.go:89] found id: ""
	I1014 15:05:22.748516   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.748527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:22.748534   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:22.748594   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:22.784646   72639 cri.go:89] found id: ""
	I1014 15:05:22.784674   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.784684   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:22.784695   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:22.784709   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:22.797853   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:22.797880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:22.875382   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:22.875406   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:22.875422   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.957055   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:22.957089   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:23.008642   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:23.008672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.561277   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:25.575543   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:25.575606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:25.614260   72639 cri.go:89] found id: ""
	I1014 15:05:25.614283   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.614291   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:25.614296   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:25.614353   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:25.654267   72639 cri.go:89] found id: ""
	I1014 15:05:25.654295   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.654307   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:25.654314   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:25.654385   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:25.707597   72639 cri.go:89] found id: ""
	I1014 15:05:25.707626   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.707637   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:25.707644   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:25.707707   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:25.747477   72639 cri.go:89] found id: ""
	I1014 15:05:25.747500   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.747508   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:25.747513   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:25.747571   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:25.785245   72639 cri.go:89] found id: ""
	I1014 15:05:25.785270   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.785279   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:25.785288   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:25.785342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:25.820619   72639 cri.go:89] found id: ""
	I1014 15:05:25.820643   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.820651   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:25.820665   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:25.820722   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:25.861644   72639 cri.go:89] found id: ""
	I1014 15:05:25.861665   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.861673   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:25.861678   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:25.861724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:25.901009   72639 cri.go:89] found id: ""
	I1014 15:05:25.901032   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.901046   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:25.901056   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:25.901068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:25.942918   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:25.942941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.993931   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:25.993964   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:26.008252   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:26.008280   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:26.087316   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:26.087336   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:26.087347   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:28.667377   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:28.682586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:28.682682   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:28.729576   72639 cri.go:89] found id: ""
	I1014 15:05:28.729600   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.729608   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:28.729614   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:28.729673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:28.766637   72639 cri.go:89] found id: ""
	I1014 15:05:28.766669   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.766682   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:28.766690   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:28.766762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:28.802280   72639 cri.go:89] found id: ""
	I1014 15:05:28.802308   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.802317   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:28.802322   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:28.802395   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:28.840788   72639 cri.go:89] found id: ""
	I1014 15:05:28.840822   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.840833   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:28.840841   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:28.840898   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:28.878403   72639 cri.go:89] found id: ""
	I1014 15:05:28.878437   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.878447   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:28.878453   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:28.878505   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:28.919054   72639 cri.go:89] found id: ""
	I1014 15:05:28.919082   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.919090   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:28.919096   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:28.919146   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:28.955097   72639 cri.go:89] found id: ""
	I1014 15:05:28.955124   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.955134   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:28.955142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:28.955214   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:28.995681   72639 cri.go:89] found id: ""
	I1014 15:05:28.995711   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.995722   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:28.995731   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:28.995746   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:29.073041   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:29.073066   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:29.073083   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:29.152803   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:29.152838   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:29.192205   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:29.192239   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:29.248128   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:29.248166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:31.762647   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:31.776372   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:31.776454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:31.812234   72639 cri.go:89] found id: ""
	I1014 15:05:31.812259   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.812268   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:31.812275   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:31.812347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:31.850248   72639 cri.go:89] found id: ""
	I1014 15:05:31.850277   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.850294   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:31.850301   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:31.850363   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:31.887768   72639 cri.go:89] found id: ""
	I1014 15:05:31.887796   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.887808   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:31.887816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:31.887870   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:31.923434   72639 cri.go:89] found id: ""
	I1014 15:05:31.923464   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.923476   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:31.923483   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:31.923547   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:31.961027   72639 cri.go:89] found id: ""
	I1014 15:05:31.961055   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.961066   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:31.961073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:31.961135   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:31.996222   72639 cri.go:89] found id: ""
	I1014 15:05:31.996250   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.996260   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:31.996267   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:31.996329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:32.034396   72639 cri.go:89] found id: ""
	I1014 15:05:32.034441   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.034452   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:32.034460   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:32.034528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:32.080105   72639 cri.go:89] found id: ""
	I1014 15:05:32.080142   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.080153   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:32.080164   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:32.080178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:32.161120   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:32.161151   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:32.213511   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:32.213546   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:32.271250   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:32.271287   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:32.285452   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:32.285483   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:32.366108   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:34.867317   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:34.882058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:34.882125   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.926220   72639 cri.go:89] found id: ""
	I1014 15:05:34.926251   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.926261   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:34.926268   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:34.926341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:34.965657   72639 cri.go:89] found id: ""
	I1014 15:05:34.965691   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.965702   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:34.965709   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:34.965775   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:35.002422   72639 cri.go:89] found id: ""
	I1014 15:05:35.002446   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.002454   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:35.002459   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:35.002523   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:35.040029   72639 cri.go:89] found id: ""
	I1014 15:05:35.040057   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.040067   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:35.040073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:35.040137   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:35.077041   72639 cri.go:89] found id: ""
	I1014 15:05:35.077067   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.077075   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:35.077080   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:35.077129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:35.113723   72639 cri.go:89] found id: ""
	I1014 15:05:35.113754   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.113763   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:35.113770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:35.113854   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:35.152003   72639 cri.go:89] found id: ""
	I1014 15:05:35.152025   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.152033   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:35.152038   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:35.152084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:35.186707   72639 cri.go:89] found id: ""
	I1014 15:05:35.186735   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.186746   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:35.186756   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:35.186769   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:35.267899   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:35.267941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:35.310382   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:35.310414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:35.364811   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:35.364852   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:35.378359   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:35.378386   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:35.453522   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:37.953807   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:37.967515   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:37.967579   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:38.007923   72639 cri.go:89] found id: ""
	I1014 15:05:38.007955   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.007964   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:38.007969   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:38.008023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:38.047451   72639 cri.go:89] found id: ""
	I1014 15:05:38.047476   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.047484   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:38.047490   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:38.047542   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:38.087141   72639 cri.go:89] found id: ""
	I1014 15:05:38.087165   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.087174   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:38.087186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:38.087234   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:38.126556   72639 cri.go:89] found id: ""
	I1014 15:05:38.126583   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.126604   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:38.126612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:38.126670   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:38.165318   72639 cri.go:89] found id: ""
	I1014 15:05:38.165341   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.165350   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:38.165356   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:38.165400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:38.199498   72639 cri.go:89] found id: ""
	I1014 15:05:38.199533   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.199544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:38.199553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:38.199618   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:38.235030   72639 cri.go:89] found id: ""
	I1014 15:05:38.235058   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.235067   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:38.235072   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:38.235129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:38.268900   72639 cri.go:89] found id: ""
	I1014 15:05:38.268926   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.268935   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:38.268943   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:38.268957   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:38.282503   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:38.282532   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:38.357943   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:38.357972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:38.357987   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:38.448417   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:38.448453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:38.490023   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:38.490049   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.045691   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:41.061188   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:41.061251   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:41.102885   72639 cri.go:89] found id: ""
	I1014 15:05:41.102909   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.102917   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:41.102923   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:41.102971   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:41.139402   72639 cri.go:89] found id: ""
	I1014 15:05:41.139427   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.139437   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:41.139444   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:41.139501   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:41.179881   72639 cri.go:89] found id: ""
	I1014 15:05:41.179926   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.179939   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:41.179946   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:41.180008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:41.215861   72639 cri.go:89] found id: ""
	I1014 15:05:41.215897   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.215910   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:41.215919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:41.215987   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:41.251314   72639 cri.go:89] found id: ""
	I1014 15:05:41.251341   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.251351   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:41.251355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:41.251404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:41.285986   72639 cri.go:89] found id: ""
	I1014 15:05:41.286010   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.286017   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:41.286025   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:41.286071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:41.323730   72639 cri.go:89] found id: ""
	I1014 15:05:41.323756   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.323764   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:41.323769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:41.323816   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:41.360787   72639 cri.go:89] found id: ""
	I1014 15:05:41.360817   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.360825   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:41.360834   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:41.360847   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:41.403137   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:41.403172   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.459217   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:41.459253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:41.473529   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:41.473558   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:41.547384   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:41.547405   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:41.547416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.129494   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:44.144061   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:44.144129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:44.185872   72639 cri.go:89] found id: ""
	I1014 15:05:44.185896   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.185904   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:44.185909   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:44.185955   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:44.222618   72639 cri.go:89] found id: ""
	I1014 15:05:44.222648   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.222658   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:44.222663   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:44.222723   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:44.260730   72639 cri.go:89] found id: ""
	I1014 15:05:44.260761   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.260773   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:44.260780   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:44.260872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:44.303033   72639 cri.go:89] found id: ""
	I1014 15:05:44.303124   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.303141   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:44.303150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:44.303223   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:44.344573   72639 cri.go:89] found id: ""
	I1014 15:05:44.344600   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.344609   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:44.344614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:44.344660   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:44.386091   72639 cri.go:89] found id: ""
	I1014 15:05:44.386122   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.386131   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:44.386137   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:44.386199   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:44.424609   72639 cri.go:89] found id: ""
	I1014 15:05:44.424634   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.424644   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:44.424656   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:44.424724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:44.463997   72639 cri.go:89] found id: ""
	I1014 15:05:44.464023   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.464033   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:44.464043   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:44.464057   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:44.516883   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:44.516921   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:44.530785   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:44.530820   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:44.605202   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:44.605229   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:44.605245   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.685277   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:44.685312   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:47.227851   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:47.242737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:47.242817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:47.279395   72639 cri.go:89] found id: ""
	I1014 15:05:47.279421   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.279428   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:47.279434   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:47.279495   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:47.315002   72639 cri.go:89] found id: ""
	I1014 15:05:47.315032   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.315043   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:47.315050   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:47.315120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:47.354133   72639 cri.go:89] found id: ""
	I1014 15:05:47.354162   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.354173   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:47.354180   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:47.354245   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:47.389394   72639 cri.go:89] found id: ""
	I1014 15:05:47.389419   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.389427   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:47.389439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:47.389498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:47.426564   72639 cri.go:89] found id: ""
	I1014 15:05:47.426592   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.426619   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:47.426627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:47.426676   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:47.466953   72639 cri.go:89] found id: ""
	I1014 15:05:47.466980   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.466989   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:47.466996   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:47.467065   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:47.508563   72639 cri.go:89] found id: ""
	I1014 15:05:47.508595   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.508605   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:47.508613   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:47.508665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:47.548974   72639 cri.go:89] found id: ""
	I1014 15:05:47.549002   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.549012   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:47.549022   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:47.549036   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:47.604768   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:47.604799   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:47.619681   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:47.619717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:47.692479   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:47.692506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:47.692522   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:47.773711   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:47.773751   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.314509   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:50.330883   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:50.330958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:50.375090   72639 cri.go:89] found id: ""
	I1014 15:05:50.375121   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.375133   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:50.375140   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:50.375201   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:50.415000   72639 cri.go:89] found id: ""
	I1014 15:05:50.415031   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.415041   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:50.415048   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:50.415099   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:50.453937   72639 cri.go:89] found id: ""
	I1014 15:05:50.453967   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.453976   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:50.453983   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:50.454047   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:50.498752   72639 cri.go:89] found id: ""
	I1014 15:05:50.498778   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.498785   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:50.498790   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:50.498858   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:50.537819   72639 cri.go:89] found id: ""
	I1014 15:05:50.537855   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.537864   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:50.537871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:50.537920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:50.577141   72639 cri.go:89] found id: ""
	I1014 15:05:50.577168   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.577179   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:50.577186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:50.577250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:50.612462   72639 cri.go:89] found id: ""
	I1014 15:05:50.612504   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.612527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:50.612535   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:50.612597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:50.648816   72639 cri.go:89] found id: ""
	I1014 15:05:50.648845   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.648855   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:50.648866   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:50.648879   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:50.662546   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:50.662578   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:50.733128   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:50.733152   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:50.733166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:50.810884   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:50.810913   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.855878   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:50.855905   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.413608   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:53.428380   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:53.428453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:53.463440   72639 cri.go:89] found id: ""
	I1014 15:05:53.463464   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.463473   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:53.463479   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:53.463534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:53.499024   72639 cri.go:89] found id: ""
	I1014 15:05:53.499050   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.499058   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:53.499064   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:53.499121   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:53.534396   72639 cri.go:89] found id: ""
	I1014 15:05:53.534425   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.534435   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:53.534442   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:53.534504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:53.571396   72639 cri.go:89] found id: ""
	I1014 15:05:53.571422   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.571432   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:53.571439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:53.571496   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:53.606219   72639 cri.go:89] found id: ""
	I1014 15:05:53.606247   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.606254   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:53.606260   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:53.606309   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:53.644906   72639 cri.go:89] found id: ""
	I1014 15:05:53.644929   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.644938   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:53.644945   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:53.645005   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:53.684764   72639 cri.go:89] found id: ""
	I1014 15:05:53.684795   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.684808   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:53.684817   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:53.684872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:53.720559   72639 cri.go:89] found id: ""
	I1014 15:05:53.720587   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.720596   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:53.720605   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:53.720626   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.773759   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:53.773798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:53.787688   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:53.787717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:53.863141   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:53.863163   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:53.863176   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:53.942949   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:53.942989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:56.487207   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:56.500670   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:56.500730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:56.533851   72639 cri.go:89] found id: ""
	I1014 15:05:56.533882   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.533894   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:56.533901   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:56.533964   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:56.573169   72639 cri.go:89] found id: ""
	I1014 15:05:56.573194   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.573201   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:56.573207   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:56.573260   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:56.608110   72639 cri.go:89] found id: ""
	I1014 15:05:56.608138   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.608151   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:56.608158   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:56.608218   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:56.646030   72639 cri.go:89] found id: ""
	I1014 15:05:56.646054   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.646061   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:56.646067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:56.646112   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:56.689427   72639 cri.go:89] found id: ""
	I1014 15:05:56.689455   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.689465   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:56.689473   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:56.689528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:56.723831   72639 cri.go:89] found id: ""
	I1014 15:05:56.723856   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.723865   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:56.723871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:56.723928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:56.756700   72639 cri.go:89] found id: ""
	I1014 15:05:56.756725   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.756734   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:56.756741   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:56.756808   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:56.788201   72639 cri.go:89] found id: ""
	I1014 15:05:56.788228   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.788235   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:56.788242   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:56.788253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:56.847840   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:56.847876   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:56.861984   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:56.862016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:56.933190   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:56.933214   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:56.933226   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:57.015909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:57.015958   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:59.559421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:59.575593   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:59.575673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:59.611369   72639 cri.go:89] found id: ""
	I1014 15:05:59.611399   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.611409   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:59.611416   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:59.611485   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:59.645786   72639 cri.go:89] found id: ""
	I1014 15:05:59.645817   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.645827   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:59.645834   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:59.645895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:59.681463   72639 cri.go:89] found id: ""
	I1014 15:05:59.681491   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.681499   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:59.681504   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:59.681553   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:59.723738   72639 cri.go:89] found id: ""
	I1014 15:05:59.723767   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.723775   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:59.723782   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:59.723845   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:59.763890   72639 cri.go:89] found id: ""
	I1014 15:05:59.763919   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.763958   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:59.763966   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:59.764027   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:59.802981   72639 cri.go:89] found id: ""
	I1014 15:05:59.803007   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.803015   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:59.803021   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:59.803074   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:59.841887   72639 cri.go:89] found id: ""
	I1014 15:05:59.841916   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.841927   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:59.841934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:59.841989   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:59.877190   72639 cri.go:89] found id: ""
	I1014 15:05:59.877221   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.877231   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:59.877240   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:59.877254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:59.890838   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:59.890864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:59.970122   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:59.970147   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:59.970163   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:00.058994   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:00.059032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:00.103227   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:00.103262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:02.655437   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:02.671240   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:02.671307   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:02.708826   72639 cri.go:89] found id: ""
	I1014 15:06:02.708859   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.708871   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:02.708879   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:02.708943   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:02.744504   72639 cri.go:89] found id: ""
	I1014 15:06:02.744535   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.744546   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:02.744553   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:02.744615   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:02.781144   72639 cri.go:89] found id: ""
	I1014 15:06:02.781180   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.781193   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:02.781201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:02.781281   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:02.819527   72639 cri.go:89] found id: ""
	I1014 15:06:02.819558   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.819567   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:02.819572   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:02.819630   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:02.855653   72639 cri.go:89] found id: ""
	I1014 15:06:02.855683   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.855693   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:02.855700   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:02.855761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:02.900843   72639 cri.go:89] found id: ""
	I1014 15:06:02.900876   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.900888   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:02.900896   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:02.900961   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:02.941812   72639 cri.go:89] found id: ""
	I1014 15:06:02.941840   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.941851   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:02.941857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:02.941919   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:02.980213   72639 cri.go:89] found id: ""
	I1014 15:06:02.980238   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.980246   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:02.980253   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:02.980265   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:03.034263   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:03.034301   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:03.048574   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:03.048606   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:03.121902   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:03.121925   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:03.121939   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:03.197407   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:03.197445   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:05.737723   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:05.751892   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:05.751959   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:05.789209   72639 cri.go:89] found id: ""
	I1014 15:06:05.789235   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.789242   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:05.789247   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:05.789294   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:05.826189   72639 cri.go:89] found id: ""
	I1014 15:06:05.826220   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.826229   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:05.826236   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:05.826344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:05.864264   72639 cri.go:89] found id: ""
	I1014 15:06:05.864297   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.864308   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:05.864314   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:05.864371   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:05.899697   72639 cri.go:89] found id: ""
	I1014 15:06:05.899724   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.899732   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:05.899737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:05.899784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:05.939552   72639 cri.go:89] found id: ""
	I1014 15:06:05.939583   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.939593   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:05.939601   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:05.939668   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:05.999732   72639 cri.go:89] found id: ""
	I1014 15:06:05.999759   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.999770   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:05.999776   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:05.999834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:06.036228   72639 cri.go:89] found id: ""
	I1014 15:06:06.036259   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.036276   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:06.036284   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:06.036343   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:06.071744   72639 cri.go:89] found id: ""
	I1014 15:06:06.071774   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.071785   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:06.071795   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:06.071808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:06.125737   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:06.125774   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:06.139150   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:06.139177   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:06.206731   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:06.206757   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:06.206773   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:06.287183   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:06.287218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:08.827345   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:08.841290   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:08.841384   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:08.877789   72639 cri.go:89] found id: ""
	I1014 15:06:08.877815   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.877824   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:08.877832   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:08.877895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:08.912491   72639 cri.go:89] found id: ""
	I1014 15:06:08.912517   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.912525   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:08.912530   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:08.912586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:08.948727   72639 cri.go:89] found id: ""
	I1014 15:06:08.948755   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.948765   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:08.948773   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:08.948837   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:08.984397   72639 cri.go:89] found id: ""
	I1014 15:06:08.984428   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.984440   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:08.984448   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:08.984498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:09.019222   72639 cri.go:89] found id: ""
	I1014 15:06:09.019250   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.019260   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:09.019268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:09.019329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:09.058309   72639 cri.go:89] found id: ""
	I1014 15:06:09.058335   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.058346   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:09.058353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:09.058415   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:09.096508   72639 cri.go:89] found id: ""
	I1014 15:06:09.096535   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.096544   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:09.096550   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:09.096599   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:09.134564   72639 cri.go:89] found id: ""
	I1014 15:06:09.134611   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.134624   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:09.134635   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:09.134647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:09.188220   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:09.188254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:09.203119   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:09.203149   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:09.279357   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:09.279379   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:09.279390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:09.364219   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:09.364253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:11.910976   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:11.926067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:11.926149   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:11.966238   72639 cri.go:89] found id: ""
	I1014 15:06:11.966271   72639 logs.go:282] 0 containers: []
	W1014 15:06:11.966282   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:11.966289   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:11.966350   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:12.002580   72639 cri.go:89] found id: ""
	I1014 15:06:12.002617   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.002630   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:12.002637   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:12.002698   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:12.037014   72639 cri.go:89] found id: ""
	I1014 15:06:12.037037   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.037046   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:12.037051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:12.037111   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:12.070937   72639 cri.go:89] found id: ""
	I1014 15:06:12.070957   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.070965   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:12.070970   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:12.071019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:12.104920   72639 cri.go:89] found id: ""
	I1014 15:06:12.104949   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.104960   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:12.104967   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:12.105026   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:12.142498   72639 cri.go:89] found id: ""
	I1014 15:06:12.142530   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.142544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:12.142555   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:12.142628   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:12.179590   72639 cri.go:89] found id: ""
	I1014 15:06:12.179613   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.179621   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:12.179627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:12.179675   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:12.213947   72639 cri.go:89] found id: ""
	I1014 15:06:12.213973   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.213981   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:12.213989   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:12.213998   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:12.268214   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:12.268257   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:12.283561   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:12.283594   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:12.382344   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:12.382367   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:12.382377   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:12.469818   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:12.469854   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.011529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:15.025355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:15.025423   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:15.060996   72639 cri.go:89] found id: ""
	I1014 15:06:15.061028   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.061040   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:15.061047   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:15.061120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:15.103050   72639 cri.go:89] found id: ""
	I1014 15:06:15.103074   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.103082   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:15.103088   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:15.103140   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:15.140095   72639 cri.go:89] found id: ""
	I1014 15:06:15.140122   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.140132   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:15.140139   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:15.140207   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:15.174612   72639 cri.go:89] found id: ""
	I1014 15:06:15.174642   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.174654   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:15.174669   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:15.174737   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:15.209116   72639 cri.go:89] found id: ""
	I1014 15:06:15.209142   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.209152   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:15.209160   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:15.209221   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:15.242857   72639 cri.go:89] found id: ""
	I1014 15:06:15.242885   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.242896   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:15.242902   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:15.242966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:15.283038   72639 cri.go:89] found id: ""
	I1014 15:06:15.283066   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.283076   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:15.283083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:15.283144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:15.319577   72639 cri.go:89] found id: ""
	I1014 15:06:15.319604   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.319612   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:15.319622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:15.319636   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:15.391485   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:15.391506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:15.391520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:15.470140   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:15.470192   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.513098   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:15.513132   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:15.568275   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:15.568305   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.085915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:18.113889   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:18.113958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:18.167486   72639 cri.go:89] found id: ""
	I1014 15:06:18.167511   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.167519   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:18.167525   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:18.167568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:18.230244   72639 cri.go:89] found id: ""
	I1014 15:06:18.230273   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.230283   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:18.230291   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:18.230351   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:18.264223   72639 cri.go:89] found id: ""
	I1014 15:06:18.264252   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.264261   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:18.264268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:18.264332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:18.298719   72639 cri.go:89] found id: ""
	I1014 15:06:18.298750   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.298762   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:18.298770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:18.298843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:18.335113   72639 cri.go:89] found id: ""
	I1014 15:06:18.335140   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.335147   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:18.335153   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:18.335212   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:18.373690   72639 cri.go:89] found id: ""
	I1014 15:06:18.373721   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.373736   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:18.373743   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:18.373792   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:18.411138   72639 cri.go:89] found id: ""
	I1014 15:06:18.411171   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.411182   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:18.411190   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:18.411250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:18.451281   72639 cri.go:89] found id: ""
	I1014 15:06:18.451306   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.451314   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:18.451323   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:18.451334   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:18.502141   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:18.502178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.517449   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:18.517476   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:18.586737   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:18.586760   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:18.586776   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:18.670234   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:18.670270   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.210200   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:21.222998   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.223053   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.257132   72639 cri.go:89] found id: ""
	I1014 15:06:21.257160   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.257167   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:21.257174   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.257237   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.290905   72639 cri.go:89] found id: ""
	I1014 15:06:21.290933   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.290945   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:21.290952   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.291007   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.331067   72639 cri.go:89] found id: ""
	I1014 15:06:21.331098   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.331108   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:21.331128   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.331178   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.370042   72639 cri.go:89] found id: ""
	I1014 15:06:21.370069   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.370077   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:21.370083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.370141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:21.414900   72639 cri.go:89] found id: ""
	I1014 15:06:21.414920   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.414932   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:21.414938   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:21.414985   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:21.452914   72639 cri.go:89] found id: ""
	I1014 15:06:21.452941   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.452952   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:21.452960   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:21.453022   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:21.486725   72639 cri.go:89] found id: ""
	I1014 15:06:21.486752   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.486763   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:21.486770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:21.486831   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:21.524012   72639 cri.go:89] found id: ""
	I1014 15:06:21.524034   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.524042   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:21.524049   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:21.524059   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:21.603238   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:21.603279   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.645655   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:21.645689   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:21.701053   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:21.701092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:21.715515   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:21.715542   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:21.781831   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
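(The cycle above, and the near-identical ones before it, is minikube polling for a healthy control plane: roughly every three seconds it looks for a kube-apiserver process and asks the CRI runtime for each expected control-plane container, and after about four minutes it gives up and falls back to a full cluster reset. Below is a minimal Go sketch of that poll-until-deadline pattern; apiServerRunning and waitForAPIServer are hypothetical stand-ins for illustration, not minikube's actual functions, which run the equivalent pgrep/crictl checks over an SSH runner.)

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiServerRunning reports whether a kube-apiserver process is visible.
// Hypothetical helper: minikube performs the equivalent check remotely.
func apiServerRunning() bool {
	return exec.Command("pgrep", "-xf", "kube-apiserver.*").Run() == nil
}

// waitForAPIServer polls until the apiserver shows up or the deadline passes.
func waitForAPIServer(timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiServerRunning() {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	// The log above reflects the same outcome: no apiserver after ~4 minutes,
	// so the caller moves on to resetting the cluster.
	if err := waitForAPIServer(4*time.Minute, 3*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
```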
	I1014 15:06:24.282018   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:24.295177   72639 kubeadm.go:597] duration metric: took 4m4.450514459s to restartPrimaryControlPlane
	W1014 15:06:24.295255   72639 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:24.295283   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:29.238014   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.942706631s)
	I1014 15:06:29.238096   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:29.258804   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:29.269440   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:29.279613   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:29.279633   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:29.279672   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:29.292840   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:29.292912   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:29.306987   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:29.319896   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:29.319970   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:29.333974   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.343993   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:29.344051   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.354691   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:29.364354   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:29.364422   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:29.374674   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:29.452845   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:06:29.452961   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:29.618263   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:29.618446   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:29.618582   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:06:29.813387   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:29.815501   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:29.815610   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:29.815697   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:29.815799   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:29.815879   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:29.815971   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:29.816039   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:29.816125   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:29.816206   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:29.816307   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:29.816404   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:29.816454   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:29.816531   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:29.944505   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:30.106467   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:30.226356   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:30.322169   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:30.342382   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:30.343666   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:30.343736   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:30.507000   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:30.509157   72639 out.go:235]   - Booting up control plane ...
	I1014 15:06:30.509293   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:30.518440   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:30.520572   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:30.522337   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:30.524996   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:07:10.525694   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:07:10.526665   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:10.526908   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:15.527128   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:15.527376   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:25.527779   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:25.528060   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:45.528527   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:45.528768   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530669   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:08:25.530970   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530998   72639 kubeadm.go:310] 
	I1014 15:08:25.531059   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:08:25.531114   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:08:25.531125   72639 kubeadm.go:310] 
	I1014 15:08:25.531177   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:08:25.531238   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:08:25.531381   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:08:25.531392   72639 kubeadm.go:310] 
	I1014 15:08:25.531527   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:08:25.531587   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:08:25.531633   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:08:25.531643   72639 kubeadm.go:310] 
	I1014 15:08:25.531766   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:08:25.531872   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:08:25.531891   72639 kubeadm.go:310] 
	I1014 15:08:25.532038   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:08:25.532174   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:08:25.532281   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:08:25.532377   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:08:25.532418   72639 kubeadm.go:310] 
	I1014 15:08:25.532543   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:08:25.532640   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:08:25.532742   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 15:08:25.532833   72639 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 15:08:25.532870   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:08:31.003635   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.470741012s)
	I1014 15:08:31.003724   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:31.018666   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:08:31.029707   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:08:31.029729   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:08:31.029776   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:08:31.039554   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:08:31.039625   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:08:31.049748   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:08:31.059618   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:08:31.059682   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:08:31.069369   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.078321   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:08:31.078385   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.088006   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:08:31.096681   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:08:31.096742   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:08:31.106269   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:08:31.182768   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:08:31.182833   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:08:31.341660   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:08:31.341833   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:08:31.342008   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:08:31.538731   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:08:31.540933   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:08:31.541037   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:08:31.541124   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:08:31.541270   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:08:31.541386   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:08:31.541486   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:08:31.541559   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:08:31.541663   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:08:31.541750   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:08:31.542000   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:08:31.542534   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:08:31.542627   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:08:31.542711   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:08:31.847005   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:08:32.049586   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:08:32.355652   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:08:32.511031   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:08:32.526310   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:08:32.526755   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:08:32.526841   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:08:32.665898   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:08:32.667688   72639 out.go:235]   - Booting up control plane ...
	I1014 15:08:32.667806   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:08:32.681232   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:08:32.682929   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:08:32.683704   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:08:32.685936   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:09:12.687998   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:09:12.688248   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:12.688517   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:17.689026   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:17.689213   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:27.689821   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:27.690119   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:47.690936   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:47.691185   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691438   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:10:27.691721   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691744   72639 kubeadm.go:310] 
	I1014 15:10:27.691779   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:10:27.691847   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:10:27.691867   72639 kubeadm.go:310] 
	I1014 15:10:27.691907   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:10:27.691972   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:10:27.692124   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:10:27.692136   72639 kubeadm.go:310] 
	I1014 15:10:27.692253   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:10:27.692311   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:10:27.692352   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:10:27.692363   72639 kubeadm.go:310] 
	I1014 15:10:27.692497   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:10:27.692617   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:10:27.692633   72639 kubeadm.go:310] 
	I1014 15:10:27.692787   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:10:27.692915   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:10:27.693051   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:10:27.693146   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:10:27.693158   72639 kubeadm.go:310] 
	I1014 15:10:27.693497   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:10:27.693627   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:10:27.693710   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 15:10:27.693770   72639 kubeadm.go:394] duration metric: took 8m7.905137486s to StartCluster
	I1014 15:10:27.693808   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:10:27.693863   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:10:27.735373   72639 cri.go:89] found id: ""
	I1014 15:10:27.735410   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.735419   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:10:27.735425   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:10:27.735484   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:10:27.775691   72639 cri.go:89] found id: ""
	I1014 15:10:27.775713   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.775721   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:10:27.775727   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:10:27.775778   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:10:27.811621   72639 cri.go:89] found id: ""
	I1014 15:10:27.811645   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.811653   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:10:27.811658   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:10:27.811718   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:10:27.850894   72639 cri.go:89] found id: ""
	I1014 15:10:27.850917   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.850925   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:10:27.850931   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:10:27.850979   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:10:27.891559   72639 cri.go:89] found id: ""
	I1014 15:10:27.891596   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.891608   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:10:27.891616   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:10:27.891671   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:10:27.929896   72639 cri.go:89] found id: ""
	I1014 15:10:27.929929   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.929942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:10:27.930002   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:10:27.930096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:10:27.964801   72639 cri.go:89] found id: ""
	I1014 15:10:27.964828   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.964839   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:10:27.964845   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:10:27.964905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:10:28.011737   72639 cri.go:89] found id: ""
	I1014 15:10:28.011761   72639 logs.go:282] 0 containers: []
	W1014 15:10:28.011769   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:10:28.011777   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:10:28.011788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:10:28.088053   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:10:28.088082   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:10:28.088098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:10:28.214495   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:10:28.214531   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:10:28.254766   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:10:28.254796   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:10:28.304942   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:10:28.304977   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1014 15:10:28.319674   72639 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 15:10:28.319729   72639 out.go:270] * 
	W1014 15:10:28.319783   72639 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.319802   72639 out.go:270] * 
	W1014 15:10:28.320716   72639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 15:10:28.324551   72639 out.go:201] 
	W1014 15:10:28.325905   72639 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.325940   72639 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 15:10:28.325985   72639 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 15:10:28.327473   72639 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-399767 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
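The exit status 109 above is minikube's K8S_KUBELET_NOT_RUNNING failure, and the captured kubeadm output already names the next diagnostic steps. As a hedged manual-reproduction sketch (not part of the automated run): the profile name, driver and runtime flags, cri-o socket path, and the --extra-config=kubelet.cgroup-driver=systemd hint are all taken from the log above; quoting the remote command and piping through local grep/tail is an illustrative choice, not the form the test harness used.

	# Kubelet health inside the VM, following the guidance in the captured kubeadm output
	out/minikube-linux-amd64 -p old-k8s-version-399767 ssh "sudo systemctl status kubelet --no-pager"
	out/minikube-linux-amd64 -p old-k8s-version-399767 ssh "sudo journalctl -xeu kubelet --no-pager" | tail -n 100

	# Control-plane containers via cri-o, as kubeadm suggests
	out/minikube-linux-amd64 -p old-k8s-version-399767 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a" | grep kube | grep -v pause

	# Retry the start with the cgroup-driver hint from the Suggestion line
	out/minikube-linux-amd64 start -p old-k8s-version-399767 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd --alsologtostderr

If the crictl listing shows kube-apiserver or etcd crash-looping, inspecting that container with `crictl logs CONTAINERID` (also quoted in the kubeadm output) will usually surface a cgroup-driver mismatch faster than waiting out the kubeadm timeout.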
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (238.705285ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-399767 logs -n 25
E1014 15:10:29.460170   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-399767 logs -n 25: (1.574426865s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-517678 sudo cat                              | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo find                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo crio                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-517678                                       | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:58:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:58:18.000027   72639 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:58:18.000165   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000176   72639 out.go:358] Setting ErrFile to fd 2...
	I1014 14:58:18.000189   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000390   72639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:58:18.000911   72639 out.go:352] Setting JSON to false
	I1014 14:58:18.001828   72639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6048,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:58:18.001919   72639 start.go:139] virtualization: kvm guest
	I1014 14:58:18.004056   72639 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:58:18.005382   72639 notify.go:220] Checking for updates...
	I1014 14:58:18.005437   72639 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:58:18.006939   72639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:58:18.008275   72639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:58:18.009565   72639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:58:18.010773   72639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:58:18.011941   72639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:58:18.013472   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:58:18.013833   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.013892   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.028372   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1014 14:58:18.028786   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.029355   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.029375   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.029671   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.029827   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.031644   72639 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:58:18.033229   72639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:58:18.033524   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.033565   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.048210   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1014 14:58:18.048620   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.049080   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.049102   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.049377   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.049550   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.084664   72639 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:58:18.085942   72639 start.go:297] selected driver: kvm2
	I1014 14:58:18.085952   72639 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.086042   72639 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:58:18.086707   72639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.086795   72639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:58:18.101802   72639 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:58:18.102194   72639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:58:18.102224   72639 cni.go:84] Creating CNI manager for ""
	I1014 14:58:18.102263   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:58:18.102315   72639 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.102441   72639 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.105418   72639 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:58:16.182868   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:18.106656   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:58:18.106696   72639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:58:18.106708   72639 cache.go:56] Caching tarball of preloaded images
	I1014 14:58:18.106790   72639 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:58:18.106800   72639 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:58:18.106889   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:58:18.107063   72639 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:58:22.262902   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:25.334877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:31.414867   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:34.486863   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:40.566883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:43.638929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:49.718856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:52.790946   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:58.870883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:01.942844   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:08.022831   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:11.094893   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:17.174897   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:20.246818   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:26.326911   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:29.398852   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:35.478877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:38.550829   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:44.630857   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:47.702856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:53.782842   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:56.854890   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:02.934894   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:06.006879   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:12.086905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:15.158856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:21.238905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:24.310889   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:30.390878   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:33.462909   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:39.542866   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:42.614929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:48.694859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:51.766865   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:57.846913   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:00.918859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:06.998892   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:10.070810   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:13.075950   72173 start.go:364] duration metric: took 3m43.687804446s to acquireMachinesLock for "embed-certs-989166"
	I1014 15:01:13.076005   72173 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:13.076011   72173 fix.go:54] fixHost starting: 
	I1014 15:01:13.076341   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:13.076386   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:13.092168   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I1014 15:01:13.092686   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:13.093180   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:01:13.093204   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:13.093560   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:13.093749   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:13.093889   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:01:13.095639   72173 fix.go:112] recreateIfNeeded on embed-certs-989166: state=Stopped err=<nil>
	I1014 15:01:13.095665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	W1014 15:01:13.095827   72173 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:13.097909   72173 out.go:177] * Restarting existing kvm2 VM for "embed-certs-989166" ...
	I1014 15:01:13.099253   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Start
	I1014 15:01:13.099433   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring networks are active...
	I1014 15:01:13.100328   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network default is active
	I1014 15:01:13.100683   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network mk-embed-certs-989166 is active
	I1014 15:01:13.101062   72173 main.go:141] libmachine: (embed-certs-989166) Getting domain xml...
	I1014 15:01:13.101867   72173 main.go:141] libmachine: (embed-certs-989166) Creating domain...
	I1014 15:01:13.073323   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:13.073356   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073658   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:01:13.073682   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:01:13.075825   71679 machine.go:96] duration metric: took 4m37.425006s to provisionDockerMachine
	I1014 15:01:13.075866   71679 fix.go:56] duration metric: took 4m37.446829923s for fixHost
	I1014 15:01:13.075872   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 4m37.446848059s
	W1014 15:01:13.075889   71679 start.go:714] error starting host: provision: host is not running
	W1014 15:01:13.075983   71679 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1014 15:01:13.075992   71679 start.go:729] Will try again in 5 seconds ...
	I1014 15:01:14.319338   72173 main.go:141] libmachine: (embed-certs-989166) Waiting to get IP...
	I1014 15:01:14.320167   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.320582   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.320651   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.320577   73268 retry.go:31] will retry after 213.073722ms: waiting for machine to come up
	I1014 15:01:14.534913   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.535353   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.535375   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.535306   73268 retry.go:31] will retry after 316.205029ms: waiting for machine to come up
	I1014 15:01:14.852769   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.853201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.853261   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.853201   73268 retry.go:31] will retry after 399.414867ms: waiting for machine to come up
	I1014 15:01:15.253657   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.253955   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.253979   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.253917   73268 retry.go:31] will retry after 537.097034ms: waiting for machine to come up
	I1014 15:01:15.792362   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.792736   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.792763   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.792703   73268 retry.go:31] will retry after 594.582114ms: waiting for machine to come up
	I1014 15:01:16.388419   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:16.388838   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:16.388869   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:16.388793   73268 retry.go:31] will retry after 814.814512ms: waiting for machine to come up
	I1014 15:01:17.204791   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:17.205229   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:17.205255   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:17.205176   73268 retry.go:31] will retry after 971.673961ms: waiting for machine to come up
	I1014 15:01:18.178701   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:18.179100   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:18.179130   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:18.179048   73268 retry.go:31] will retry after 941.576822ms: waiting for machine to come up
	I1014 15:01:19.122097   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:19.122488   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:19.122514   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:19.122453   73268 retry.go:31] will retry after 1.5308999s: waiting for machine to come up
	I1014 15:01:18.077601   71679 start.go:360] acquireMachinesLock for no-preload-813300: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:01:20.655098   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:20.655524   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:20.655550   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:20.655475   73268 retry.go:31] will retry after 1.590510545s: waiting for machine to come up
	I1014 15:01:22.248128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:22.248551   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:22.248572   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:22.248511   73268 retry.go:31] will retry after 1.965898839s: waiting for machine to come up
	I1014 15:01:24.215742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:24.216187   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:24.216240   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:24.216136   73268 retry.go:31] will retry after 3.476459931s: waiting for machine to come up
	I1014 15:01:27.696804   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:27.697201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:27.697254   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:27.697175   73268 retry.go:31] will retry after 3.212757582s: waiting for machine to come up
	I1014 15:01:32.235659   72390 start.go:364] duration metric: took 3m35.715993521s to acquireMachinesLock for "default-k8s-diff-port-201291"
	I1014 15:01:32.235710   72390 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:32.235718   72390 fix.go:54] fixHost starting: 
	I1014 15:01:32.236084   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:32.236134   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:32.253294   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I1014 15:01:32.253760   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:32.254255   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:01:32.254275   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:32.254616   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:32.254797   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:32.254973   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:01:32.256494   72390 fix.go:112] recreateIfNeeded on default-k8s-diff-port-201291: state=Stopped err=<nil>
	I1014 15:01:32.256523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	W1014 15:01:32.256683   72390 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:32.258989   72390 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-201291" ...
	I1014 15:01:30.911781   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912283   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has current primary IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912313   72173 main.go:141] libmachine: (embed-certs-989166) Found IP for machine: 192.168.39.41
	I1014 15:01:30.912331   72173 main.go:141] libmachine: (embed-certs-989166) Reserving static IP address...
	I1014 15:01:30.912771   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.912799   72173 main.go:141] libmachine: (embed-certs-989166) DBG | skip adding static IP to network mk-embed-certs-989166 - found existing host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"}
	I1014 15:01:30.912806   72173 main.go:141] libmachine: (embed-certs-989166) Reserved static IP address: 192.168.39.41
	I1014 15:01:30.912815   72173 main.go:141] libmachine: (embed-certs-989166) Waiting for SSH to be available...
	I1014 15:01:30.912822   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Getting to WaitForSSH function...
	I1014 15:01:30.914919   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915273   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.915310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915392   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH client type: external
	I1014 15:01:30.915414   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa (-rw-------)
	I1014 15:01:30.915465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:30.915489   72173 main.go:141] libmachine: (embed-certs-989166) DBG | About to run SSH command:
	I1014 15:01:30.915503   72173 main.go:141] libmachine: (embed-certs-989166) DBG | exit 0
	I1014 15:01:31.042620   72173 main.go:141] libmachine: (embed-certs-989166) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:31.043061   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetConfigRaw
	I1014 15:01:31.043675   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.046338   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046679   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.046720   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046941   72173 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/config.json ...
	I1014 15:01:31.047132   72173 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:31.047149   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.047348   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.049453   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049835   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.049857   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049978   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.050147   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050282   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050419   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.050573   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.050814   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.050828   72173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:31.163270   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:31.163306   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163614   72173 buildroot.go:166] provisioning hostname "embed-certs-989166"
	I1014 15:01:31.163644   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.166684   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167009   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.167040   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.167416   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167582   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167718   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.167857   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.168025   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.168040   72173 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-989166 && echo "embed-certs-989166" | sudo tee /etc/hostname
	I1014 15:01:31.292369   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-989166
	
	I1014 15:01:31.292405   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.295057   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295425   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.295449   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295713   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.295915   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296076   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296220   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.296395   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.296552   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.296567   72173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-989166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-989166/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-989166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:31.411080   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:31.411112   72173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:31.411131   72173 buildroot.go:174] setting up certificates
	I1014 15:01:31.411142   72173 provision.go:84] configureAuth start
	I1014 15:01:31.411150   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.411396   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.413972   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414319   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.414341   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.416775   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417092   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.417113   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417278   72173 provision.go:143] copyHostCerts
	I1014 15:01:31.417340   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:31.417353   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:31.417437   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:31.417549   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:31.417559   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:31.417600   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:31.417677   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:31.417687   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:31.417721   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:31.417788   72173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.embed-certs-989166 san=[127.0.0.1 192.168.39.41 embed-certs-989166 localhost minikube]
	I1014 15:01:31.599973   72173 provision.go:177] copyRemoteCerts
	I1014 15:01:31.600034   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:31.600060   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.602964   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603270   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.603296   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.603665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.603821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.603949   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:31.688890   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:31.713474   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 15:01:31.737692   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 15:01:31.760955   72173 provision.go:87] duration metric: took 349.799595ms to configureAuth
	I1014 15:01:31.760986   72173 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:31.761172   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:31.761244   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.763800   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764149   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.764181   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.764494   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764636   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764732   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.764852   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.765002   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.765016   72173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:31.992783   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:31.992811   72173 machine.go:96] duration metric: took 945.667058ms to provisionDockerMachine
	I1014 15:01:31.992823   72173 start.go:293] postStartSetup for "embed-certs-989166" (driver="kvm2")
	I1014 15:01:31.992834   72173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:31.992848   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.993203   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:31.993235   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.995966   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996380   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.996418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996538   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.996714   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.996864   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.997003   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.081931   72173 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:32.086191   72173 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:32.086218   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:32.086287   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:32.086368   72173 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:32.086455   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:32.096414   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:32.120348   72173 start.go:296] duration metric: took 127.509685ms for postStartSetup
	I1014 15:01:32.120392   72173 fix.go:56] duration metric: took 19.044380323s for fixHost
	I1014 15:01:32.120412   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.123024   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123435   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.123465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123649   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.123832   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.123986   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.124152   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.124288   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:32.124487   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:32.124502   72173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:32.235487   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918092.208431219
	
	I1014 15:01:32.235513   72173 fix.go:216] guest clock: 1728918092.208431219
	I1014 15:01:32.235522   72173 fix.go:229] Guest: 2024-10-14 15:01:32.208431219 +0000 UTC Remote: 2024-10-14 15:01:32.12039587 +0000 UTC m=+242.874215269 (delta=88.035349ms)
	I1014 15:01:32.235559   72173 fix.go:200] guest clock delta is within tolerance: 88.035349ms
	I1014 15:01:32.235572   72173 start.go:83] releasing machines lock for "embed-certs-989166", held for 19.159587089s
	I1014 15:01:32.235600   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.235877   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:32.238642   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.238995   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.239025   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.239175   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239719   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239891   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239978   72173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:32.240031   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.240091   72173 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:32.240115   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.242742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243102   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243177   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243275   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243482   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.243653   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243664   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.243676   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243811   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243822   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.243929   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.244050   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.244168   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.357542   72173 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:32.365113   72173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:32.510557   72173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:32.516545   72173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:32.516628   72173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:32.533449   72173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:32.533473   72173 start.go:495] detecting cgroup driver to use...
	I1014 15:01:32.533549   72173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:32.549503   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:32.563126   72173 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:32.563184   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:32.576972   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:32.591047   72173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:32.704839   72173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:32.844770   72173 docker.go:233] disabling docker service ...
	I1014 15:01:32.844855   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:32.859524   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:32.872297   72173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:33.014291   72173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:33.136889   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:33.151656   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:33.170504   72173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:33.170575   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.180894   72173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:33.180968   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.192268   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.203509   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.215958   72173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:33.227981   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.241615   72173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.261168   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.273098   72173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:33.284050   72173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:33.284225   72173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:33.299547   72173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:33.310259   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:33.426563   72173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:33.526759   72173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:33.526817   72173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:33.532297   72173 start.go:563] Will wait 60s for crictl version
	I1014 15:01:33.532356   72173 ssh_runner.go:195] Run: which crictl
	I1014 15:01:33.536385   72173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:33.576222   72173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:33.576305   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.604603   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.636261   72173 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:33.637497   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:33.640450   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.640768   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:33.640806   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.641001   72173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:33.645241   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:33.658028   72173 kubeadm.go:883] updating cluster {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:33.658181   72173 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:33.658246   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:33.695188   72173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:33.695261   72173 ssh_runner.go:195] Run: which lz4
	I1014 15:01:33.699735   72173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:33.704540   72173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:33.704576   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:32.260401   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Start
	I1014 15:01:32.260569   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring networks are active...
	I1014 15:01:32.261176   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network default is active
	I1014 15:01:32.261498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network mk-default-k8s-diff-port-201291 is active
	I1014 15:01:32.261795   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Getting domain xml...
	I1014 15:01:32.262414   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Creating domain...
	I1014 15:01:33.520115   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting to get IP...
	I1014 15:01:33.521127   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521518   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.521520   73405 retry.go:31] will retry after 278.409801ms: waiting for machine to come up
	I1014 15:01:33.802289   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802720   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.802688   73405 retry.go:31] will retry after 362.923826ms: waiting for machine to come up
	I1014 15:01:34.167836   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168228   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168273   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.168163   73405 retry.go:31] will retry after 315.156371ms: waiting for machine to come up
	I1014 15:01:34.485445   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485855   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.485840   73405 retry.go:31] will retry after 573.46626ms: waiting for machine to come up
	I1014 15:01:35.061472   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.061997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.062027   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.061965   73405 retry.go:31] will retry after 519.420022ms: waiting for machine to come up
	I1014 15:01:35.582694   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583130   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583155   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.583062   73405 retry.go:31] will retry after 661.055324ms: waiting for machine to come up
	I1014 15:01:36.245525   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:36.245834   73405 retry.go:31] will retry after 870.411428ms: waiting for machine to come up
	I1014 15:01:35.120269   72173 crio.go:462] duration metric: took 1.42058504s to copy over tarball
	I1014 15:01:35.120372   72173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:37.206126   72173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08572724s)
	I1014 15:01:37.206168   72173 crio.go:469] duration metric: took 2.085859852s to extract the tarball
	I1014 15:01:37.206176   72173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:37.243007   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:37.289639   72173 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:37.289667   72173 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:37.289678   72173 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.31.1 crio true true} ...
	I1014 15:01:37.289793   72173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-989166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:37.289878   72173 ssh_runner.go:195] Run: crio config
	I1014 15:01:37.348641   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:37.348665   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:37.348684   72173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:37.348711   72173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-989166 NodeName:embed-certs-989166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:37.348861   72173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-989166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:37.348925   72173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:37.359204   72173 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:37.359272   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:37.368810   72173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 15:01:37.385402   72173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:37.401828   72173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1014 15:01:37.418811   72173 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:37.422655   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:37.434567   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:37.561408   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:37.579549   72173 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166 for IP: 192.168.39.41
	I1014 15:01:37.579577   72173 certs.go:194] generating shared ca certs ...
	I1014 15:01:37.579596   72173 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:37.579766   72173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:37.579878   72173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:37.579894   72173 certs.go:256] generating profile certs ...
	I1014 15:01:37.579998   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/client.key
	I1014 15:01:37.580079   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key.8939f8c2
	I1014 15:01:37.580148   72173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key
	I1014 15:01:37.580316   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:37.580364   72173 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:37.580376   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:37.580413   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:37.580445   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:37.580482   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:37.580536   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:37.581259   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:37.632130   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:37.678608   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:37.705377   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:37.731897   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 15:01:37.775043   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:37.801653   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:37.826547   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:37.852086   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:37.878715   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:37.905883   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:37.932458   72173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:37.951362   72173 ssh_runner.go:195] Run: openssl version
	I1014 15:01:37.957730   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:37.969936   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974871   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974931   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.981060   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:37.992086   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:38.003528   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008267   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008332   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.014243   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:38.025272   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:38.036191   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040751   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040804   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.046605   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:38.057815   72173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:38.062497   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:38.068889   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:38.075278   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:38.081663   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:38.087892   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:38.093748   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:01:38.099807   72173 kubeadm.go:392] StartCluster: {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:38.099912   72173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:38.099960   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.140896   72173 cri.go:89] found id: ""
	I1014 15:01:38.140973   72173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:38.151443   72173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:38.151462   72173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:38.151512   72173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:38.161419   72173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:38.162357   72173 kubeconfig.go:125] found "embed-certs-989166" server: "https://192.168.39.41:8443"
	I1014 15:01:38.164328   72173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:38.174731   72173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.41
	I1014 15:01:38.174767   72173 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:38.174782   72173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:38.174849   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.214903   72173 cri.go:89] found id: ""
	I1014 15:01:38.214982   72173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:38.232891   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:38.242711   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:38.242735   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:38.242793   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:01:38.251939   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:38.252019   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:38.262108   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:01:38.271688   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:38.271751   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:38.281420   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.290693   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:38.290752   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.300205   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:01:38.309174   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:38.309236   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:38.318616   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:38.328337   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:38.436297   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
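	The grep/rm sequence above is a staleness check: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 (or is missing entirely, as here) is removed so the subsequent "kubeadm init phase kubeconfig all" can regenerate it. A local Go analogue of that decision, assuming the same endpoint and paths (not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

// The expected control-plane endpoint, as grep'd for in the log above.
const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err == nil && strings.Contains(string(data), endpoint) {
			fmt.Printf("kept: %s\n", path)
			continue
		}
		// Missing or stale: remove it (ignoring errors, like `rm -f`) so
		// `kubeadm init phase kubeconfig all` writes a fresh copy.
		_ = os.Remove(path)
		fmt.Printf("stale or missing, cleared: %s\n", path)
	}
}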
	I1014 15:01:37.118307   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:37.118706   73405 retry.go:31] will retry after 1.481454557s: waiting for machine to come up
	I1014 15:01:38.601780   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602168   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602212   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:38.602118   73405 retry.go:31] will retry after 1.22705177s: waiting for machine to come up
	I1014 15:01:39.831413   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831889   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831963   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:39.831838   73405 retry.go:31] will retry after 1.898722681s: waiting for machine to come up
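	The repeated "will retry after N: waiting for machine to come up" lines are a jittered, growing backoff wrapped around a libvirt DHCP-lease lookup that has not yet returned an IP. A small sketch of that retry shape, where lookupIP is a hypothetical stand-in for the lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address of domain")

// lookupIP is hypothetical; in the log this is libmachine asking libvirt for
// the domain's DHCP lease. Here it "succeeds" on the fifth attempt.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.50.128", nil
}

func main() {
	base := time.Second
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		// Grow the wait a little each round and add jitter, which is why the
		// observed delays wander between roughly 1.2s and 3.5s.
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		base += 500 * time.Millisecond
	}
}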
	I1014 15:01:39.574410   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138075676s)
	I1014 15:01:39.574444   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.789417   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.873563   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:40.011579   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:40.011673   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:40.511877   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.012608   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.512235   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.012435   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.047878   72173 api_server.go:72] duration metric: took 2.036298602s to wait for apiserver process to appear ...
	I1014 15:01:42.047909   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:01:42.047935   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.298692   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.298726   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.298743   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.317315   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.317353   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.548651   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.559477   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:44.559513   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.048060   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.057070   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.057099   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.548344   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.552611   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.552640   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:46.048314   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:46.054943   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:01:46.062740   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:01:46.062769   72173 api_server.go:131] duration metric: took 4.014851988s to wait for apiserver health ...
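	The healthz sequence above is a plain poll loop: hit https://<node>:8443/healthz, tolerate the early 403/500 responses (the probe is anonymous and RBAC/priority-class bootstrap is still running), and stop once the endpoint returns 200 "ok" or the wait budget is exhausted. A minimal sketch of that loop, assuming the node IP from the log (illustrative, not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// TLS verification is skipped because the anonymous probe does not trust
	// the apiserver's self-signed serving certificate.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
			// 403 while anonymous auth is rejected, 500 while post-start hooks
			// (rbac/bootstrap-roles etc.) are still failing.
			fmt.Printf("%s returned %d, retrying\n", url, resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.41:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}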
	I1014 15:01:46.062779   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:46.062785   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:46.064824   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:01:41.731928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732483   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732515   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:41.732435   73405 retry.go:31] will retry after 2.349662063s: waiting for machine to come up
	I1014 15:01:44.083975   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084492   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:44.084437   73405 retry.go:31] will retry after 3.472214726s: waiting for machine to come up
	I1014 15:01:46.066505   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:01:46.092975   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
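	The 496-byte file installed above as /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration referred to by "Configuring bridge CNI". An illustrative conflist of that general shape (typical bridge + portmap settings, not necessarily the exact bytes minikube writes):

package main

import (
	"fmt"
	"os"
)

// conflist is an example bridge CNI config; the field values are common
// bridge-plugin defaults, not a byte-for-byte copy of minikube's template.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Written to the working directory here; the log scp's it to
	// /etc/cni/net.d/1-k8s.conflist on the node.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}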
	I1014 15:01:46.123873   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:01:46.142575   72173 system_pods.go:59] 8 kube-system pods found
	I1014 15:01:46.142636   72173 system_pods.go:61] "coredns-7c65d6cfc9-r8x9s" [5a00095c-8777-412a-a7af-319a03d6153e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:01:46.142647   72173 system_pods.go:61] "etcd-embed-certs-989166" [981d2f54-f128-4527-a7cb-a6b9c647740b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:01:46.142658   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [31780b5a-6ebf-4c75-bd27-64a95193827f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:01:46.142668   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [345e7656-579a-4be9-bcf0-4117880a2988] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:01:46.142678   72173 system_pods.go:61] "kube-proxy-7p84k" [5d8243a8-7247-490f-9102-61008a614a67] Running
	I1014 15:01:46.142685   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [53b4b4a4-74ec-485e-99e3-b53c2edc80ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:01:46.142695   72173 system_pods.go:61] "metrics-server-6867b74b74-zc8zh" [5abf22c7-d271-4c3a-8e0e-cd867142cee1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:01:46.142703   72173 system_pods.go:61] "storage-provisioner" [6860efa4-c72f-477f-b9e1-e90ddcd112b5] Running
	I1014 15:01:46.142711   72173 system_pods.go:74] duration metric: took 18.811157ms to wait for pod list to return data ...
	I1014 15:01:46.142722   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:01:46.154420   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:01:46.154449   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:01:46.154463   72173 node_conditions.go:105] duration metric: took 11.735142ms to run NodePressure ...
	I1014 15:01:46.154483   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:46.417106   72173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422102   72173 kubeadm.go:739] kubelet initialised
	I1014 15:01:46.422127   72173 kubeadm.go:740] duration metric: took 4.991248ms waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422135   72173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:01:46.428014   72173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.432946   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432965   72173 pod_ready.go:82] duration metric: took 4.927935ms for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.432972   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432979   72173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.441849   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441868   72173 pod_ready.go:82] duration metric: took 8.882863ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.441877   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441883   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.446863   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446891   72173 pod_ready.go:82] duration metric: took 4.997658ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.446912   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446922   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.526949   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526972   72173 pod_ready.go:82] duration metric: took 80.035898ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.526981   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526987   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927217   72173 pod_ready.go:93] pod "kube-proxy-7p84k" in "kube-system" namespace has status "Ready":"True"
	I1014 15:01:46.927249   72173 pod_ready.go:82] duration metric: took 400.252417ms for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927263   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:48.933034   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
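	The pod_ready block above polls each system-critical pod's Ready condition for up to 4m0s, and a pod is skipped (not failed) while its hosting node still reports Ready=False. A generic sketch of that wait, where checkReady is a hypothetical stand-in for the apiserver query:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNodeNotReady = errors.New(`node has status "Ready":"False", skipping`)

// checkReady is hypothetical; the real check reads the pod's Ready condition
// from the apiserver. Here it reports Ready on the third poll.
func checkReady(pod string, attempt int) (bool, error) {
	if attempt < 3 {
		return false, nil
	}
	return true, nil
}

func waitPodReady(pod string, timeout, interval time.Duration) error {
	deadline := time.Now().Add(timeout)
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ready, err := checkReady(pod, attempt)
		switch {
		case errors.Is(err, errNodeNotReady):
			// Mirrors the "(skipping!)" lines above: don't block on pods whose
			// node is not yet Ready.
			fmt.Printf("%s: %v\n", pod, err)
			return nil
		case err != nil:
			return err
		case ready:
			fmt.Printf("%s is Ready\n", pod)
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("%s not Ready within %v", pod, timeout)
}

func main() {
	_ = waitPodReady("kube-scheduler-embed-certs-989166", 4*time.Minute, 400*time.Millisecond)
}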
	I1014 15:01:47.558671   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559112   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:47.559067   73405 retry.go:31] will retry after 3.421253013s: waiting for machine to come up
	I1014 15:01:50.981602   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has current primary IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982167   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Found IP for machine: 192.168.50.128
	I1014 15:01:50.982186   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserving static IP address...
	I1014 15:01:50.982682   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.982703   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserved static IP address: 192.168.50.128
	I1014 15:01:50.982722   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | skip adding static IP to network mk-default-k8s-diff-port-201291 - found existing host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"}
	I1014 15:01:50.982743   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Getting to WaitForSSH function...
	I1014 15:01:50.982781   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for SSH to be available...
	I1014 15:01:50.985084   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.985640   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985750   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH client type: external
	I1014 15:01:50.985778   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa (-rw-------)
	I1014 15:01:50.985814   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:50.985832   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | About to run SSH command:
	I1014 15:01:50.985849   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | exit 0
	I1014 15:01:51.123927   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:51.124457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetConfigRaw
	I1014 15:01:51.125106   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.128286   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.128716   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.128770   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.129045   72390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/config.json ...
	I1014 15:01:51.129283   72390 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:51.129308   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.129551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.131756   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132164   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.132207   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132488   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.132701   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.132873   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.133022   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.133181   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.133421   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.133436   72390 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:51.244659   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:51.244691   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.244923   72390 buildroot.go:166] provisioning hostname "default-k8s-diff-port-201291"
	I1014 15:01:51.244953   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.245149   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.248061   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248429   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.248463   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248521   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.248697   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.248887   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.249034   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.249227   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.249448   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.249463   72390 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-201291 && echo "default-k8s-diff-port-201291" | sudo tee /etc/hostname
	I1014 15:01:51.373260   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-201291
	
	I1014 15:01:51.373293   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.376195   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376528   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.376549   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376752   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.376962   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377159   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377296   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.377446   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.377657   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.377676   72390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-201291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-201291/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-201291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:52.179441   72639 start.go:364] duration metric: took 3m34.072351032s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 15:01:52.179497   72639 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:52.179505   72639 fix.go:54] fixHost starting: 
	I1014 15:01:52.179834   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:52.179873   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:52.196724   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I1014 15:01:52.197171   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:52.197649   72639 main.go:141] libmachine: Using API Version  1
	I1014 15:01:52.197673   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:52.198010   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:52.198191   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:01:52.198337   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 15:01:52.199789   72639 fix.go:112] recreateIfNeeded on old-k8s-version-399767: state=Stopped err=<nil>
	I1014 15:01:52.199826   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	W1014 15:01:52.199998   72639 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:52.202220   72639 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	I1014 15:01:52.203601   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .Start
	I1014 15:01:52.203771   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 15:01:52.204575   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 15:01:52.204971   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 15:01:52.205326   72639 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 15:01:52.206026   72639 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 15:01:51.488446   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:51.488486   72390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:51.488535   72390 buildroot.go:174] setting up certificates
	I1014 15:01:51.488553   72390 provision.go:84] configureAuth start
	I1014 15:01:51.488570   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.488867   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.491749   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492141   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.492171   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492351   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.494197   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.494524   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494693   72390 provision.go:143] copyHostCerts
	I1014 15:01:51.494745   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:51.494764   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:51.494834   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:51.494945   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:51.494958   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:51.494992   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:51.495081   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:51.495095   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:51.495122   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:51.495214   72390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-201291 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-201291 localhost minikube]
	I1014 15:01:51.567041   72390 provision.go:177] copyRemoteCerts
	I1014 15:01:51.567098   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:51.567121   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.570006   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570340   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.570368   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570562   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.570769   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.570941   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.571047   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:51.652956   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:51.677959   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 15:01:51.702009   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:01:51.727016   72390 provision.go:87] duration metric: took 238.449189ms to configureAuth
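	configureAuth above regenerates the machine's server certificate so its SANs cover every name and address the host answers to (san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-201291 localhost minikube]) and then copies the PEMs into /etc/docker. A compact crypto/x509 sketch of that SAN handling, self-signed here for brevity where minikube signs with the ca.pem/ca-key.pem pair listed above:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-201291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs mirror the san=[...] list in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.128")},
		DNSNames:    []string{"default-k8s-diff-port-201291", "localhost", "minikube"},
	}
	// Self-signed for the sketch: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}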
	I1014 15:01:51.727043   72390 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:51.727207   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:51.727276   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.729742   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730043   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.730065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.730418   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730578   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730735   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.730891   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.731097   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.731114   72390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:51.942847   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:51.942874   72390 machine.go:96] duration metric: took 813.575194ms to provisionDockerMachine
	I1014 15:01:51.942888   72390 start.go:293] postStartSetup for "default-k8s-diff-port-201291" (driver="kvm2")
	I1014 15:01:51.942903   72390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:51.942926   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.943250   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:51.943283   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.946246   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946608   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.946638   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946799   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.946984   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.947165   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.947293   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.030124   72390 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:52.034493   72390 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:52.034525   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:52.034625   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:52.034740   72390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:52.034834   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:52.044919   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:52.068326   72390 start.go:296] duration metric: took 125.426221ms for postStartSetup
	I1014 15:01:52.068370   72390 fix.go:56] duration metric: took 19.832650283s for fixHost
	I1014 15:01:52.068394   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.070949   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071362   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.071388   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071588   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.071788   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.071908   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.072065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.072231   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:52.072449   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:52.072468   72390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:52.179264   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918112.149610573
	
	I1014 15:01:52.179291   72390 fix.go:216] guest clock: 1728918112.149610573
	I1014 15:01:52.179301   72390 fix.go:229] Guest: 2024-10-14 15:01:52.149610573 +0000 UTC Remote: 2024-10-14 15:01:52.06837553 +0000 UTC m=+235.685992564 (delta=81.235043ms)
	I1014 15:01:52.179349   72390 fix.go:200] guest clock delta is within tolerance: 81.235043ms
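The fix.go lines above compare the guest clock against the host clock and accept the ~81ms skew because it falls inside a tolerance. A minimal Go sketch of that comparison follows; the 2s threshold and the function name are assumptions for illustration, not minikube's actual code.

	// clockdelta.go: sketch of a guest/host clock drift check (assumed names).
	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance returns the absolute guest-host skew and
	// whether it is within the allowed tolerance.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(81 * time.Millisecond) // roughly the delta seen in the log above
		d, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
		fmt.Printf("delta=%v withinTolerance=%v\n", d, ok)
	}
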
	I1014 15:01:52.179354   72390 start.go:83] releasing machines lock for "default-k8s-diff-port-201291", held for 19.943664398s
	I1014 15:01:52.179387   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.179666   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:52.182457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.182834   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.182861   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.183000   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183598   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183883   72390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:52.183928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.183993   72390 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:52.184017   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.186499   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186692   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186890   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.186915   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187021   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.187050   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187086   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187288   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187331   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187479   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187485   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187597   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.187688   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187843   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.264102   72390 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:52.291233   72390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:52.443318   72390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:52.450321   72390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:52.450400   72390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:52.467949   72390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:52.467975   72390 start.go:495] detecting cgroup driver to use...
	I1014 15:01:52.468039   72390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:52.485758   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:52.500662   72390 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:52.500729   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:52.520846   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:52.535606   72390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:52.671062   72390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:52.845631   72390 docker.go:233] disabling docker service ...
	I1014 15:01:52.845694   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:52.867403   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:52.882344   72390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:53.020570   72390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:53.157941   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:53.174989   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:53.195729   72390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:53.195799   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.207613   72390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:53.207671   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.218838   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.231186   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.247521   72390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:53.258128   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.269119   72390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.287810   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.298576   72390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:53.308114   72390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:53.308169   72390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:53.322207   72390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:53.332284   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:53.483702   72390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:53.581260   72390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:53.581341   72390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:53.586042   72390 start.go:563] Will wait 60s for crictl version
	I1014 15:01:53.586105   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:01:53.589931   72390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:53.634776   72390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:53.634864   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.664242   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.698374   72390 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:50.933590   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:52.935445   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:53.699730   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:53.702837   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703224   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:53.703245   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703528   72390 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:53.707720   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:53.721953   72390 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:53.722106   72390 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:53.722165   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:53.779083   72390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:53.779139   72390 ssh_runner.go:195] Run: which lz4
	I1014 15:01:53.783197   72390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:53.787515   72390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:53.787549   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:55.277150   72390 crio.go:462] duration metric: took 1.493980352s to copy over tarball
	I1014 15:01:55.277212   72390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:53.506315   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 15:01:53.507576   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.508228   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.508297   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.508202   73581 retry.go:31] will retry after 220.59125ms: waiting for machine to come up
	I1014 15:01:53.730853   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.731286   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.731339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.731257   73581 retry.go:31] will retry after 321.559387ms: waiting for machine to come up
	I1014 15:01:54.054891   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.055482   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.055509   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.055443   73581 retry.go:31] will retry after 444.912998ms: waiting for machine to come up
	I1014 15:01:54.502125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.502479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.502525   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.502462   73581 retry.go:31] will retry after 600.214254ms: waiting for machine to come up
	I1014 15:01:55.104962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.105479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.105504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.105425   73581 retry.go:31] will retry after 686.77698ms: waiting for machine to come up
	I1014 15:01:55.794125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.794825   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.794871   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.794717   73581 retry.go:31] will retry after 926.146146ms: waiting for machine to come up
	I1014 15:01:56.722712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:56.723153   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:56.723183   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:56.723112   73581 retry.go:31] will retry after 1.108272037s: waiting for machine to come up
	I1014 15:01:57.832729   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:57.833304   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:57.833356   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:57.833279   73581 retry.go:31] will retry after 1.442737664s: waiting for machine to come up
	I1014 15:01:55.435691   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.933561   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.424526   72390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.147277316s)
	I1014 15:01:57.424559   72390 crio.go:469] duration metric: took 2.147385522s to extract the tarball
	I1014 15:01:57.424566   72390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:57.461792   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:57.504424   72390 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:57.504450   72390 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:57.504460   72390 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.1 crio true true} ...
	I1014 15:01:57.504656   72390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-201291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:57.504759   72390 ssh_runner.go:195] Run: crio config
	I1014 15:01:57.555431   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:01:57.555453   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:57.555462   72390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:57.555482   72390 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-201291 NodeName:default-k8s-diff-port-201291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:57.555593   72390 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-201291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.128"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:57.555652   72390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:57.565953   72390 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:57.566025   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:57.576141   72390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1014 15:01:57.594855   72390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:57.611249   72390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1014 15:01:57.628363   72390 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:57.632552   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:57.645588   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:57.769192   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:57.787654   72390 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291 for IP: 192.168.50.128
	I1014 15:01:57.787677   72390 certs.go:194] generating shared ca certs ...
	I1014 15:01:57.787695   72390 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:57.787865   72390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:57.787916   72390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:57.787930   72390 certs.go:256] generating profile certs ...
	I1014 15:01:57.788084   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/client.key
	I1014 15:01:57.788174   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key.517dfce8
	I1014 15:01:57.788223   72390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key
	I1014 15:01:57.788371   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:57.788407   72390 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:57.788417   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:57.788439   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:57.788460   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:57.788482   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:57.788521   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:57.789141   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:57.821159   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:57.875530   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:57.902687   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:57.935658   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 15:01:57.961987   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:57.987107   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:58.013544   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:58.039793   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:58.071154   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:58.102574   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:58.127398   72390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:58.144906   72390 ssh_runner.go:195] Run: openssl version
	I1014 15:01:58.150817   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:58.162122   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167170   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167240   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.173692   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:58.185769   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:58.197045   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201652   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201716   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.207559   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:58.218921   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:58.230822   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235774   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235832   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.241546   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:58.252618   72390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:58.257509   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:58.263891   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:58.270085   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:58.276427   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:58.282346   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:58.288396   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
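The openssl invocations above (`openssl x509 -noout -checkend 86400`) verify that each control-plane certificate remains valid for at least another 24 hours. A small, self-contained Go equivalent of one such check; the certificate path is a placeholder and this is only an illustration of the technique, not the code the test runs.

	// certcheck.go: sketch of an "expires within 24h?" certificate check.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at certPath
	// expires within the given window (what -checkend 86400 asks).
	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block found in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}
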
	I1014 15:01:58.294386   72390 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:58.294472   72390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:58.294517   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.342008   72390 cri.go:89] found id: ""
	I1014 15:01:58.342088   72390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:58.352478   72390 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:58.352512   72390 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:58.352566   72390 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:58.363158   72390 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:58.364106   72390 kubeconfig.go:125] found "default-k8s-diff-port-201291" server: "https://192.168.50.128:8444"
	I1014 15:01:58.366079   72390 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:58.375635   72390 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I1014 15:01:58.375666   72390 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:58.375680   72390 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:58.375733   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.411846   72390 cri.go:89] found id: ""
	I1014 15:01:58.411923   72390 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:58.428602   72390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:58.439214   72390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:58.439239   72390 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:58.439293   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1014 15:01:58.448475   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:58.448528   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:58.457816   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1014 15:01:58.467279   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:58.467352   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:58.477479   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.487899   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:58.487968   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.498296   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1014 15:01:58.507910   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:58.507977   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:58.517901   72390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:58.527983   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:58.654226   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.576099   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.790552   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.879043   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.963369   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:59.963462   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.464403   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.963891   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.994849   72390 api_server.go:72] duration metric: took 1.031477803s to wait for apiserver process to appear ...
	I1014 15:02:00.994875   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:00.994897   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
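The api_server.go lines above (and the 403/500 responses further down) show minikube repeatedly polling https://192.168.50.128:8444/healthz until the apiserver reports healthy. A minimal Go sketch of such a poller follows; the insecure TLS config, timeouts and function name are assumptions made for illustration, since the real client authenticates against the cluster CA.

	// healthz.go: sketch of polling an apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz issues GETs against url until it returns 200 OK or the
	// overall timeout elapses, printing each non-OK body along the way.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustration only
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			} else {
				fmt.Println("healthz not reachable yet:", err)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz not ok within %v", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.128:8444/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
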
	I1014 15:01:59.278031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:59.278558   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:59.278586   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:59.278519   73581 retry.go:31] will retry after 1.187069828s: waiting for machine to come up
	I1014 15:02:00.467810   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:00.468237   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:00.468267   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:00.468195   73581 retry.go:31] will retry after 1.667312665s: waiting for machine to come up
	I1014 15:02:02.137067   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:02.137569   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:02.137590   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:02.137530   73581 retry.go:31] will retry after 1.910892221s: waiting for machine to come up
	I1014 15:01:59.994818   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:00.130085   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:00.130109   72173 pod_ready.go:82] duration metric: took 13.202838085s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:00.130121   72173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:02.142821   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:03.649728   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:03.649764   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:03.649780   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:03.754772   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:03.754805   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:03.995106   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.020015   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.020040   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.495270   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.501643   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.501694   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.995049   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.002865   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:05.002893   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:05.495412   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.499936   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:02:05.506656   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:02:05.506685   72390 api_server.go:131] duration metric: took 4.511803211s to wait for apiserver health ...
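The block above is minikube polling the apiserver's /healthz endpoint until the remaining post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and friends) finish and the response flips from 500 to 200. A minimal Go sketch of the same kind of poll, assuming the https://192.168.50.128:8444/healthz address seen in this log and skipping TLS verification purely for illustration (minikube itself trusts the cluster CA rather than disabling verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Client that tolerates the apiserver's self-signed serving cert (illustrative only).
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.50.128:8444/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy:", string(body)) // body is simply "ok"
                    return
                }
                // A 500 with per-hook [+]/[-] lines means some post-start hooks
                // have not completed yet; back off briefly and retry.
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
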
	I1014 15:02:05.506694   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:02:05.506700   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:05.508420   72390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:02:05.509685   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:02:05.521314   72390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:02:05.543021   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:02:05.553508   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:02:05.553539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:02:05.553548   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:02:05.553555   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:02:05.553562   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:02:05.553567   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:02:05.553572   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:02:05.553577   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:02:05.553581   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:02:05.553587   72390 system_pods.go:74] duration metric: took 10.544168ms to wait for pod list to return data ...
	I1014 15:02:05.553593   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:02:05.558889   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:02:05.558917   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:02:05.558929   72390 node_conditions.go:105] duration metric: took 5.331009ms to run NodePressure ...
	I1014 15:02:05.558948   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:05.819037   72390 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826431   72390 kubeadm.go:739] kubelet initialised
	I1014 15:02:05.826456   72390 kubeadm.go:740] duration metric: took 7.391664ms waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826463   72390 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:05.833547   72390 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.840150   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840175   72390 pod_ready.go:82] duration metric: took 6.599969ms for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.840186   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840205   72390 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.850319   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850346   72390 pod_ready.go:82] duration metric: took 10.130163ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.850359   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850368   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.857192   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857215   72390 pod_ready.go:82] duration metric: took 6.838793ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.857228   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857237   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.946611   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946646   72390 pod_ready.go:82] duration metric: took 89.397304ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.946663   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946674   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.346368   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346400   72390 pod_ready.go:82] duration metric: took 399.71513ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.346413   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346423   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.746899   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746928   72390 pod_ready.go:82] duration metric: took 400.494872ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.746941   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746951   72390 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:07.146147   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146175   72390 pod_ready.go:82] duration metric: took 399.215075ms for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:07.146199   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146215   72390 pod_ready.go:39] duration metric: took 1.319742206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
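The pod_ready loop above checks each system-critical pod for the Ready condition and skips pods whose node is still reporting "Ready":"False". A rough client-go sketch of the same Ready-condition check, assuming the kubeconfig path this run writes under the Jenkins workspace; it only illustrates the condition lookup, not minikube's actual pod_ready.go logic:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether a pod's Ready condition is True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/19790-7836/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        // List the kube-system pods and print their readiness, as the wait loop does.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%s ready=%v\n", p.Name, isReady(&p))
        }
    }
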
	I1014 15:02:07.146237   72390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:02:07.158049   72390 ops.go:34] apiserver oom_adj: -16
	I1014 15:02:07.158072   72390 kubeadm.go:597] duration metric: took 8.805549392s to restartPrimaryControlPlane
	I1014 15:02:07.158082   72390 kubeadm.go:394] duration metric: took 8.863707122s to StartCluster
	I1014 15:02:07.158102   72390 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.158192   72390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:07.159622   72390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.159917   72390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:02:07.159968   72390 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:02:07.160052   72390 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160074   72390 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160086   72390 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:02:07.160125   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160133   72390 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160166   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:07.160181   72390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-201291"
	I1014 15:02:07.160179   72390 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160228   72390 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160251   72390 addons.go:243] addon metrics-server should already be in state true
	I1014 15:02:07.160312   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160472   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160508   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160692   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160712   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160729   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160770   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.161892   72390 out.go:177] * Verifying Kubernetes components...
	I1014 15:02:07.163368   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:07.176101   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1014 15:02:07.176351   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I1014 15:02:07.176705   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.176834   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.177272   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177298   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177392   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177413   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177600   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I1014 15:02:07.177639   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.177703   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.178070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.178181   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178244   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178252   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178285   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178566   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.178590   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.178944   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.179107   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.181971   72390 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.181989   72390 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:02:07.182024   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.182278   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.182322   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.194707   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1014 15:02:07.195401   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.196015   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.196043   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.196413   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.196511   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35479
	I1014 15:02:07.196618   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.196977   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.197479   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.197497   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.197520   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I1014 15:02:07.197848   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.197981   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.198048   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.198544   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.198567   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.198636   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199017   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.199817   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.199824   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199864   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.200860   72390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:07.201674   72390 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:02:04.050521   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:04.051060   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:04.051099   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:04.051015   73581 retry.go:31] will retry after 2.29433775s: waiting for machine to come up
	I1014 15:02:06.347519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:06.347985   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:06.348004   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:06.347945   73581 retry.go:31] will retry after 3.499922823s: waiting for machine to come up
	I1014 15:02:07.202461   72390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.202476   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:02:07.202491   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.203259   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:02:07.203275   72390 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:02:07.203292   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.205760   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206124   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.206150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206375   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.206533   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.206676   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.206729   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206858   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.207134   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.207150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.207248   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.207455   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.207559   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.207677   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.219554   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I1014 15:02:07.220070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.220483   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.220508   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.220842   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.221004   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.222706   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.222961   72390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.222979   72390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:02:07.222997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.225715   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226209   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.226250   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.226964   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.227118   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.227254   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.362105   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:07.384279   72390 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:07.438536   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.551868   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:02:07.551897   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:02:07.606347   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.656287   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:02:07.656313   72390 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:02:07.687002   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.687027   72390 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:02:07.751715   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.810869   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.810902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811193   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.811247   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811262   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811273   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.811281   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811546   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811562   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811576   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.819897   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.819917   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.820156   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.820206   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.820179   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581553   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581583   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.581902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581943   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.581955   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.581974   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581986   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.582197   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.582211   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595214   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595493   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.595569   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595589   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595609   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595623   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595833   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595847   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595864   72390 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-201291"
	I1014 15:02:08.597967   72390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:02:04.638029   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:07.139428   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.248505   71679 start.go:364] duration metric: took 53.170862497s to acquireMachinesLock for "no-preload-813300"
	I1014 15:02:11.248567   71679 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:02:11.248581   71679 fix.go:54] fixHost starting: 
	I1014 15:02:11.248978   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:11.249022   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:11.266270   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I1014 15:02:11.266780   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:11.267302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:02:11.267319   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:11.267675   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:11.267842   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:11.267984   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:02:11.269459   71679 fix.go:112] recreateIfNeeded on no-preload-813300: state=Stopped err=<nil>
	I1014 15:02:11.269484   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	W1014 15:02:11.269589   71679 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:02:11.271434   71679 out.go:177] * Restarting existing kvm2 VM for "no-preload-813300" ...
	I1014 15:02:08.599138   72390 addons.go:510] duration metric: took 1.439175047s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:02:09.388573   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:09.851017   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851562   72639 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 15:02:09.851582   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851587   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 15:02:09.851961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.851991   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | skip adding static IP to network mk-old-k8s-version-399767 - found existing host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"}
	I1014 15:02:09.852009   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 15:02:09.852021   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 15:02:09.852031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 15:02:09.854039   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854351   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.854378   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854493   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 15:02:09.854517   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 15:02:09.854547   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:09.854559   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 15:02:09.854572   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 15:02:09.979174   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:09.979594   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 15:02:09.980252   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:09.983038   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.983502   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983891   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 15:02:09.984191   72639 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:09.984220   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:09.984487   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:09.986947   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987361   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.987389   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987514   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:09.987682   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987830   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987924   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:09.988076   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:09.988338   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:09.988352   72639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:10.098944   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:10.098968   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099242   72639 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 15:02:10.099268   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099437   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.101961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102298   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.102320   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102468   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.102670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102846   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102980   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.103124   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.103337   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.103353   72639 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 15:02:10.226037   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 15:02:10.226069   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.228712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229059   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.229082   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229228   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.229408   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229549   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.229804   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.230001   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.230018   72639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:10.344175   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:10.344206   72639 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:10.344270   72639 buildroot.go:174] setting up certificates
	I1014 15:02:10.344284   72639 provision.go:84] configureAuth start
	I1014 15:02:10.344302   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.344632   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:10.347200   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347587   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.347623   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347812   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.349962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350332   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.350364   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350502   72639 provision.go:143] copyHostCerts
	I1014 15:02:10.350558   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:10.350574   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:10.350646   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:10.350734   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:10.350742   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:10.350762   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:10.350812   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:10.350819   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:10.350837   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:10.350887   72639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
	I1014 15:02:10.602118   72639 provision.go:177] copyRemoteCerts
	I1014 15:02:10.602175   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:10.602199   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.604519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604744   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.604776   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.605127   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.605273   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.605403   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:10.689081   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:10.713512   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 15:02:10.738086   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:10.762274   72639 provision.go:87] duration metric: took 417.977128ms to configureAuth
	I1014 15:02:10.762307   72639 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:10.762486   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 15:02:10.762552   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.765134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765442   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.765469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765600   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.765756   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765903   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765998   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.766131   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.766297   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.766311   72639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:11.011252   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:11.011279   72639 machine.go:96] duration metric: took 1.027069423s to provisionDockerMachine
	I1014 15:02:11.011292   72639 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 15:02:11.011304   72639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:11.011349   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.011716   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:11.011751   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.014418   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014754   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.014790   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.015125   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.015260   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.015376   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.097883   72639 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:11.102452   72639 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:11.102481   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:11.102551   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:11.102687   72639 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:11.102781   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:11.112774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:11.138211   72639 start.go:296] duration metric: took 126.906035ms for postStartSetup
	I1014 15:02:11.138247   72639 fix.go:56] duration metric: took 18.958741429s for fixHost
	I1014 15:02:11.138270   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.140740   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141100   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.141139   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141280   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.141484   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141668   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141811   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.141974   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:11.142131   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:11.142141   72639 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:11.248330   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918131.224010283
	
	I1014 15:02:11.248355   72639 fix.go:216] guest clock: 1728918131.224010283
	I1014 15:02:11.248373   72639 fix.go:229] Guest: 2024-10-14 15:02:11.224010283 +0000 UTC Remote: 2024-10-14 15:02:11.138252894 +0000 UTC m=+233.173555624 (delta=85.757389ms)
	I1014 15:02:11.248399   72639 fix.go:200] guest clock delta is within tolerance: 85.757389ms
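
The clock check above compares the guest's `date +%s.%N` output against the host's view of "now" and accepts the drift if it is small (here 85.757389ms). A minimal Go sketch of that comparison, assuming a one-second tolerance; the actual threshold is not shown in the log:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the absolute skew between the guest and host clocks
// and whether it falls inside the given tolerance.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above (seconds.nanoseconds).
	guest := time.Unix(1728918131, 224010283)
	host := time.Unix(1728918131, 138252894)
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
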
	I1014 15:02:11.248406   72639 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 19.068928968s
	I1014 15:02:11.248434   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.248692   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:11.251774   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.252176   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252358   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.252840   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253017   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253104   72639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:11.253150   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.253232   72639 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:11.253259   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.256105   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256529   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256662   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.256732   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256771   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256844   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.256932   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.257003   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257141   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.257131   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.257296   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257414   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.363838   72639 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:11.370414   72639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:11.521232   72639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:11.527623   72639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:11.527712   72639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:11.544532   72639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:11.544559   72639 start.go:495] detecting cgroup driver to use...
	I1014 15:02:11.544614   72639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:11.561693   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:11.576555   72639 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:11.576622   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:11.593830   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:11.608785   72639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:11.731034   72639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:11.909278   72639 docker.go:233] disabling docker service ...
	I1014 15:02:11.909359   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:11.931218   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:11.951710   72639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:12.103012   72639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:12.252290   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:12.270497   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:12.293240   72639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 15:02:12.293297   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.304881   72639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:12.304958   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.316294   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.328591   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
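
The sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod" in /etc/crio/crio.conf.d/02-crio.conf. A rough Go sketch of equivalent in-memory edits; this is illustrative only, since minikube performs these as sed commands over SSH:

package main

import (
	"fmt"
	"regexp"
)

// configureCrio applies the same three edits to the config text.
func configureCrio(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	// Drop any existing conmon_cgroup line, then pin it to "pod" right after
	// the cgroup_manager line, matching the sed '/.../d' and '/.../a' pair.
	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
	conf = conmon.ReplaceAllString(conf, "")
	after := regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`)
	conf = after.ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := "pause_image = \"k8s.gcr.io/pause:3.5\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(configureCrio(in))
}
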
	I1014 15:02:12.340085   72639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:12.351765   72639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:12.362454   72639 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:12.362525   72639 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:12.376865   72639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:12.387779   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:12.528541   72639 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:12.635262   72639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:12.635335   72639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:12.641070   72639 start.go:563] Will wait 60s for crictl version
	I1014 15:02:12.641121   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:12.645111   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:12.691103   72639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:12.691199   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.720182   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.754856   72639 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 15:02:12.756005   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:12.759369   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.759890   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:12.759924   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.760164   72639 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:12.765342   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:12.782182   72639 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:12.782307   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 15:02:12.782374   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:12.841797   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:12.841871   72639 ssh_runner.go:195] Run: which lz4
	I1014 15:02:12.846193   72639 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:02:12.850982   72639 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:02:12.851019   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 15:02:09.636366   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.637804   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:13.638684   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.272626   71679 main.go:141] libmachine: (no-preload-813300) Calling .Start
	I1014 15:02:11.272827   71679 main.go:141] libmachine: (no-preload-813300) Ensuring networks are active...
	I1014 15:02:11.273510   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network default is active
	I1014 15:02:11.273954   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network mk-no-preload-813300 is active
	I1014 15:02:11.274410   71679 main.go:141] libmachine: (no-preload-813300) Getting domain xml...
	I1014 15:02:11.275263   71679 main.go:141] libmachine: (no-preload-813300) Creating domain...
	I1014 15:02:12.614590   71679 main.go:141] libmachine: (no-preload-813300) Waiting to get IP...
	I1014 15:02:12.615572   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.616018   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.616092   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.616013   73776 retry.go:31] will retry after 302.312986ms: waiting for machine to come up
	I1014 15:02:12.919678   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.920039   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.920074   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.920005   73776 retry.go:31] will retry after 371.392955ms: waiting for machine to come up
	I1014 15:02:13.292596   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.293214   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.293244   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.293164   73776 retry.go:31] will retry after 299.379251ms: waiting for machine to come up
	I1014 15:02:13.594808   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.595344   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.595370   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.595297   73776 retry.go:31] will retry after 598.480386ms: waiting for machine to come up
	I1014 15:02:14.195149   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.195744   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.195775   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.195696   73776 retry.go:31] will retry after 567.581822ms: waiting for machine to come up
	I1014 15:02:14.764315   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.764863   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.764886   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.764815   73776 retry.go:31] will retry after 587.597591ms: waiting for machine to come up
	I1014 15:02:15.353495   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:15.353948   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:15.353980   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:15.353896   73776 retry.go:31] will retry after 1.024496536s: waiting for machine to come up
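
The retry.go lines above poll for the domain's DHCP lease, waiting a little longer between attempts. A small Go sketch of such a retry loop with a stubbed lookup; the backoff constants, jitter, and the returned address are made-up illustrations, not minikube's actual parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries lookupIP until it succeeds or the attempts run out,
// growing the base delay and adding a little jitter each round.
func waitForIP(lookupIP func() (string, error), attempts int) (string, error) {
	wait := 300 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		fmt.Printf("retry %d: will retry after %v\n", i+1, wait+jitter)
		time.Sleep(wait + jitter)
		wait = wait * 3 / 2 // grow the base delay
	}
	return "", errors.New("machine did not get an IP in time")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.0.2.10", nil // placeholder address
	}, 10)
	fmt.Println(ip, err)
}
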
	I1014 15:02:11.889135   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:13.889200   72390 node_ready.go:49] node "default-k8s-diff-port-201291" has status "Ready":"True"
	I1014 15:02:13.889228   72390 node_ready.go:38] duration metric: took 6.504919545s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:13.889240   72390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:13.898112   72390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:15.907127   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:14.579304   72639 crio.go:462] duration metric: took 1.733147869s to copy over tarball
	I1014 15:02:14.579405   72639 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:02:17.644891   72639 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06545265s)
	I1014 15:02:17.644954   72639 crio.go:469] duration metric: took 3.065620277s to extract the tarball
	I1014 15:02:17.644979   72639 ssh_runner.go:146] rm: /preloaded.tar.lz4
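
The preload step above copies the lz4 tarball to the guest, unpacks it under /var with tar, and deletes it. A minimal Go sketch of the extraction call, shelling out the same way the logged command does; paths and error handling are illustrative and the scp step is omitted:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks a cached lz4 image tarball into /var, preserving
// extended attributes as in the logged tar invocation.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("extracting %s: %v: %s", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
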
	I1014 15:02:17.688304   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:17.727862   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:17.727888   72639 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:17.727984   72639 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.727995   72639 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.728006   72639 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.728036   72639 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.727986   72639 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.728104   72639 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.728169   72639 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 15:02:17.728267   72639 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.729941   72639 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729954   72639 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 15:02:17.729984   72639 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.729999   72639 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.729913   72639 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.730335   72639 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.889181   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.912728   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.919124   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.920117   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.934314   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 15:02:17.951143   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.956588   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.964968   72639 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 15:02:17.965031   72639 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.965066   72639 ssh_runner.go:195] Run: which crictl
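
The cache_images lines above decide that an image "needs transfer" when the container runtime does not hold it at the expected ID. A tiny Go sketch of that decision, using the image name and hash printed in the log as sample data; the function itself is illustrative:

package main

import "fmt"

// needsTransfer reports whether an image must be loaded from the local
// cache: either the runtime does not have it at all, or it is present
// under a different ID than expected.
func needsTransfer(image, wantID string, runtimeIDs map[string]string) bool {
	got, ok := runtimeIDs[image]
	return !ok || got != wantID
}

func main() {
	runtime := map[string]string{} // nothing preloaded for v1.20.0
	img := "registry.k8s.io/kube-scheduler:v1.20.0"
	want := "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899"
	if needsTransfer(img, want, runtime) {
		fmt.Printf("%q needs transfer: not present at hash %s\n", img, want)
	}
}
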
	I1014 15:02:16.139535   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:18.637888   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:16.379768   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:16.380165   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:16.380236   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:16.380142   73776 retry.go:31] will retry after 1.022289492s: waiting for machine to come up
	I1014 15:02:17.403892   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:17.404406   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:17.404430   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:17.404383   73776 retry.go:31] will retry after 1.277226075s: waiting for machine to come up
	I1014 15:02:18.683704   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:18.684176   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:18.684200   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:18.684126   73776 retry.go:31] will retry after 2.146714263s: waiting for machine to come up
	I1014 15:02:18.406707   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.412201   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:21.406229   72390 pod_ready.go:93] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.406256   72390 pod_ready.go:82] duration metric: took 7.508120497s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.406269   72390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413868   72390 pod_ready.go:93] pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.413896   72390 pod_ready.go:82] duration metric: took 7.618897ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413910   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:18.041388   72639 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 15:02:18.041436   72639 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.041489   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041504   72639 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 15:02:18.041540   72639 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.041579   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069534   72639 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 15:02:18.069582   72639 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 15:02:18.069631   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069794   72639 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 15:02:18.069821   72639 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.069852   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.096492   72639 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 15:02:18.096536   72639 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.096575   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104764   72639 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 15:02:18.104810   72639 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.104816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.104854   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104876   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.104885   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.104980   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.104984   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.105025   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.119784   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.213816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.241644   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.288717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.288820   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.288931   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.289005   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.295481   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.376936   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.393755   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.449717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.449798   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.449824   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.449904   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.461905   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.508804   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 15:02:18.521502   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 15:02:18.612103   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 15:02:18.613450   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 15:02:18.613548   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 15:02:18.613625   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 15:02:18.613715   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 15:02:18.741774   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:18.888495   72639 cache_images.go:92] duration metric: took 1.16058525s to LoadCachedImages
	W1014 15:02:18.888578   72639 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1014 15:02:18.888594   72639 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 15:02:18.888707   72639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:18.888791   72639 ssh_runner.go:195] Run: crio config
	I1014 15:02:18.943058   72639 cni.go:84] Creating CNI manager for ""
	I1014 15:02:18.943082   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:18.943091   72639 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:18.943108   72639 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 15:02:18.943225   72639 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:18.943285   72639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 15:02:18.956635   72639 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:18.956727   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:18.970846   72639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 15:02:18.992163   72639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:19.012061   72639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 15:02:19.033158   72639 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:19.037195   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:19.051127   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:19.172992   72639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:19.190545   72639 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 15:02:19.190572   72639 certs.go:194] generating shared ca certs ...
	I1014 15:02:19.190592   72639 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.190786   72639 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:19.190843   72639 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:19.190853   72639 certs.go:256] generating profile certs ...
	I1014 15:02:19.190973   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 15:02:19.191053   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 15:02:19.191108   72639 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 15:02:19.191264   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:19.191302   72639 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:19.191314   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:19.191345   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:19.191374   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:19.191423   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:19.191477   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:19.192328   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:19.248981   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:19.281262   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:19.312859   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:19.351940   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 15:02:19.405710   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:19.441313   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:19.481774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 15:02:19.509433   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:19.537994   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:19.564460   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:19.593632   72639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:19.614775   72639 ssh_runner.go:195] Run: openssl version
	I1014 15:02:19.623548   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:19.636680   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642225   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642286   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.648609   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:19.661130   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:19.672988   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678119   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678189   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.684583   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:19.696685   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:19.708338   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713443   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713502   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.719482   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:19.731720   72639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:19.739006   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:19.747558   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:19.756399   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:19.764987   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:19.773320   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:19.781239   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:02:19.788638   72639 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:19.788753   72639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:19.788810   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.829586   72639 cri.go:89] found id: ""
	I1014 15:02:19.829641   72639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:19.844632   72639 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:19.844654   72639 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:19.844708   72639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:19.860547   72639 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:19.861848   72639 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:19.862755   72639 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-399767" cluster setting kubeconfig missing "old-k8s-version-399767" context setting]
	I1014 15:02:19.863757   72639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.927447   72639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:19.940830   72639 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.138
	I1014 15:02:19.940919   72639 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:19.940947   72639 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:19.941009   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.983689   72639 cri.go:89] found id: ""
	I1014 15:02:19.983769   72639 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:20.007079   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:20.023868   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:20.023896   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:20.023971   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:20.038661   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:20.038734   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:20.054357   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:20.068771   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:20.068843   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:20.081157   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.095416   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:20.095483   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.109099   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:20.120608   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:20.120680   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:20.133217   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:20.145896   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:20.311840   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.472918   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.161037865s)
	I1014 15:02:21.472953   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.739827   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.833423   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.931874   72639 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:21.931987   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.432595   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.932784   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:21.138446   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.636836   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.833532   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:20.833974   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:20.834000   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:20.833930   73776 retry.go:31] will retry after 1.936414638s: waiting for machine to come up
	I1014 15:02:22.771789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:22.772183   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:22.772206   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:22.772148   73776 retry.go:31] will retry after 2.51581517s: waiting for machine to come up
	I1014 15:02:25.290082   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:25.290491   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:25.290518   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:25.290453   73776 retry.go:31] will retry after 3.279920525s: waiting for machine to come up
	I1014 15:02:21.420355   72390 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.420385   72390 pod_ready.go:82] duration metric: took 6.465669ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.420398   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427723   72390 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.427747   72390 pod_ready.go:82] duration metric: took 7.340946ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427760   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433500   72390 pod_ready.go:93] pod "kube-proxy-rh82t" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.433526   72390 pod_ready.go:82] duration metric: took 5.757064ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433543   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802632   72390 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.802660   72390 pod_ready.go:82] duration metric: took 369.107697ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802672   72390 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:23.811046   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:26.308105   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.432728   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.932296   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.432079   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.932064   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.432201   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.932119   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.432423   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.932675   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.432633   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.932380   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.637287   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.137136   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.572901   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:28.573383   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:28.573421   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:28.573304   73776 retry.go:31] will retry after 5.283390724s: waiting for machine to come up
	I1014 15:02:28.310800   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:30.400310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.432518   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.932871   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.432350   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.932761   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.432621   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.932873   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.432716   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.932364   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.432747   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.933039   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.637300   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.136858   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.858151   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858626   71679 main.go:141] libmachine: (no-preload-813300) Found IP for machine: 192.168.61.13
	I1014 15:02:33.858660   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has current primary IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858670   71679 main.go:141] libmachine: (no-preload-813300) Reserving static IP address...
	I1014 15:02:33.859001   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.859022   71679 main.go:141] libmachine: (no-preload-813300) Reserved static IP address: 192.168.61.13
	I1014 15:02:33.859040   71679 main.go:141] libmachine: (no-preload-813300) DBG | skip adding static IP to network mk-no-preload-813300 - found existing host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"}
	I1014 15:02:33.859055   71679 main.go:141] libmachine: (no-preload-813300) DBG | Getting to WaitForSSH function...
	I1014 15:02:33.859065   71679 main.go:141] libmachine: (no-preload-813300) Waiting for SSH to be available...
	I1014 15:02:33.860949   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861245   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.861287   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861398   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH client type: external
	I1014 15:02:33.861424   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa (-rw-------)
	I1014 15:02:33.861460   71679 main.go:141] libmachine: (no-preload-813300) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:33.861476   71679 main.go:141] libmachine: (no-preload-813300) DBG | About to run SSH command:
	I1014 15:02:33.861488   71679 main.go:141] libmachine: (no-preload-813300) DBG | exit 0
	I1014 15:02:33.991450   71679 main.go:141] libmachine: (no-preload-813300) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:33.991854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetConfigRaw
	I1014 15:02:33.992623   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:33.995514   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.995884   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.995908   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.996225   71679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/config.json ...
	I1014 15:02:33.996549   71679 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:33.996572   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:33.996784   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:33.999385   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999751   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.999789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999948   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.000135   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000312   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000455   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.000648   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.000874   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.000890   71679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:34.114981   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:34.115014   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115245   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:02:34.115272   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115421   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.117557   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.117890   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.117929   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.118027   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.118210   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118365   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118524   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.118720   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.118913   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.118932   71679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-813300 && echo "no-preload-813300" | sudo tee /etc/hostname
	I1014 15:02:34.246092   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-813300
	
	I1014 15:02:34.246149   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.248672   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249095   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.249122   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249331   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.249505   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249860   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.250061   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.250272   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.250297   71679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:34.373470   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:34.373512   71679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:34.373576   71679 buildroot.go:174] setting up certificates
	I1014 15:02:34.373594   71679 provision.go:84] configureAuth start
	I1014 15:02:34.373613   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.373903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:34.376697   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.376986   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.377009   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.377137   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.379469   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379813   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.379838   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379981   71679 provision.go:143] copyHostCerts
	I1014 15:02:34.380034   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:34.380050   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:34.380106   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:34.380194   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:34.380201   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:34.380223   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:34.380282   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:34.380288   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:34.380305   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:34.380362   71679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.no-preload-813300 san=[127.0.0.1 192.168.61.13 localhost minikube no-preload-813300]
	I1014 15:02:34.421281   71679 provision.go:177] copyRemoteCerts
	I1014 15:02:34.421331   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:34.421353   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.423903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424219   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.424248   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424471   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.424665   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.424807   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.424948   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.512847   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:34.539814   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:02:34.568946   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:34.593444   71679 provision.go:87] duration metric: took 219.83393ms to configureAuth
	I1014 15:02:34.593467   71679 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:34.593661   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:34.593744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.596317   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596626   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.596659   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596819   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.597008   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597159   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597295   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.597433   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.597611   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.597631   71679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:34.837224   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:34.837244   71679 machine.go:96] duration metric: took 840.680679ms to provisionDockerMachine
	I1014 15:02:34.837256   71679 start.go:293] postStartSetup for "no-preload-813300" (driver="kvm2")
	I1014 15:02:34.837265   71679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:34.837281   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:34.837593   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:34.837625   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.840357   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840677   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.840702   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840845   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.841025   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.841193   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.841363   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.930754   71679 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:34.935428   71679 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:34.935457   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:34.935541   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:34.935659   71679 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:34.935795   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:34.946363   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:34.973029   71679 start.go:296] duration metric: took 135.76066ms for postStartSetup
	I1014 15:02:34.973074   71679 fix.go:56] duration metric: took 23.72449375s for fixHost
	I1014 15:02:34.973098   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.975897   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976211   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.976237   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976487   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.976687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976813   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976923   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.977075   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.977294   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.977309   71679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:35.091556   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918155.078304162
	
	I1014 15:02:35.091581   71679 fix.go:216] guest clock: 1728918155.078304162
	I1014 15:02:35.091590   71679 fix.go:229] Guest: 2024-10-14 15:02:35.078304162 +0000 UTC Remote: 2024-10-14 15:02:34.973079478 +0000 UTC m=+359.485826316 (delta=105.224684ms)
	I1014 15:02:35.091610   71679 fix.go:200] guest clock delta is within tolerance: 105.224684ms
	I1014 15:02:35.091616   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 23.843071366s
	I1014 15:02:35.091641   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.091899   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:35.094383   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094712   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.094733   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094910   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095353   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095534   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095589   71679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:35.095658   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.095750   71679 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:35.095773   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.098288   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098316   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098680   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098713   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098743   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098795   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098835   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099003   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099186   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099198   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099367   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099371   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.099513   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099728   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.179961   71679 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:35.205523   71679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:35.350662   71679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:35.356870   71679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:35.356941   71679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:35.374967   71679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:35.374997   71679 start.go:495] detecting cgroup driver to use...
	I1014 15:02:35.375067   71679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:35.393194   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:35.408295   71679 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:35.408362   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:35.423927   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:35.438753   71679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:32.809221   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:34.811962   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:35.567539   71679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:35.702830   71679 docker.go:233] disabling docker service ...
	I1014 15:02:35.702916   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:35.720822   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:35.735403   71679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:35.880532   71679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:36.003343   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:36.018230   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:36.037065   71679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:02:36.037134   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.047820   71679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:36.047880   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.058531   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.069760   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.081047   71679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:36.092384   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.103241   71679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.121771   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.132886   71679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:36.143239   71679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:36.143308   71679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:36.156582   71679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:36.165955   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:36.283857   71679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:36.388165   71679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:36.388243   71679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:36.393324   71679 start.go:563] Will wait 60s for crictl version
	I1014 15:02:36.393378   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.397236   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:36.444749   71679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:36.444839   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.474831   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.520531   71679 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:02:33.432474   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.932719   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.432581   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.932863   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.432886   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.932915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.432852   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.932367   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.432894   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.933035   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.637235   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.137613   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:36.521865   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:36.524566   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.524956   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:36.524984   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.525213   71679 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:36.529579   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:36.542554   71679 kubeadm.go:883] updating cluster {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:36.542701   71679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:02:36.542737   71679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:36.585681   71679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:02:36.585719   71679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:36.585806   71679 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.585838   71679 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.585865   71679 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.585886   71679 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1014 15:02:36.585925   71679 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.585814   71679 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.585954   71679 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.585843   71679 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587263   71679 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.587290   71679 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.587326   71679 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587274   71679 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1014 15:02:36.737070   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.750146   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.750401   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.767605   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1014 15:02:36.775005   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.797223   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.833657   71679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1014 15:02:36.833708   71679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.833754   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.833875   71679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1014 15:02:36.833896   71679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.833929   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.850009   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.911675   71679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1014 15:02:36.911720   71679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.911779   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973319   71679 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1014 15:02:36.973354   71679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.973383   71679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1014 15:02:36.973394   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973414   71679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.973453   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.973456   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973519   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.973619   71679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1014 15:02:36.973640   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.973644   71679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.973671   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.044689   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.044739   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.044815   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.044860   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.044907   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.044947   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166670   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.166737   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166794   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.166908   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.166924   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.272802   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.272835   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.287078   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1014 15:02:37.287167   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.287207   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.287240   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1014 15:02:37.287293   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1014 15:02:37.287320   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:37.287367   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:37.354510   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.354621   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1014 15:02:37.354659   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1014 15:02:37.354676   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354700   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1014 15:02:37.354711   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:37.354719   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354790   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1014 15:02:37.354812   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1014 15:02:37.354865   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:37.532403   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.443614   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1: (2.089069189s)
	I1014 15:02:39.443676   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1014 15:02:39.443766   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.089027703s)
	I1014 15:02:39.443790   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1014 15:02:39.443775   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:39.443813   71679 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443833   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.089105476s)
	I1014 15:02:39.443854   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443861   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1014 15:02:39.443911   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.089031069s)
	I1014 15:02:39.443933   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1014 15:02:39.443986   71679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.911557292s)
	I1014 15:02:39.444029   71679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1014 15:02:39.444057   71679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.444111   71679 ssh_runner.go:195] Run: which crictl
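Because this is the no-preload profile, the images cannot come from a preload tarball; the block above instead runs the cached-image path: inspect each required image with podman, remove any copy whose ID does not match the expected digest with crictl rmi, then podman-load the image from the tarballs under /var/lib/minikube/images. A condensed sketch of that inspect/rmi/load cycle (loadCachedImage is an illustrative helper; the real flow also compares the inspected ID against the pinned hash and skips copying tarballs that already exist on the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// loadCachedImage ensures image is present in the container runtime, loading it
	// from a pre-cached tarball when the inspect step cannot find a usable copy.
	func loadCachedImage(image, tarball string) error {
		// Already present? Then there is nothing to do.
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
			return nil
		}
		// Drop any stale copy first; the error is ignored when nothing was there.
		_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
		// Load the image from the tarball previously copied to the node.
		return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
	}

	func main() {
		err := loadCachedImage("registry.k8s.io/kube-scheduler:v1.31.1",
			"/var/lib/minikube/images/kube-scheduler_v1.31.1")
		fmt.Println(err)
	}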
	I1014 15:02:37.309522   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:39.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.432551   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.932486   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.432591   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.932694   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.432065   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.932044   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.432313   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.933055   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.432453   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.932258   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.137656   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:42.637462   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:41.514958   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.071133048s)
	I1014 15:02:41.514987   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.071109487s)
	I1014 15:02:41.515016   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1014 15:02:41.515041   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515046   71679 ssh_runner.go:235] Completed: which crictl: (2.070916553s)
	I1014 15:02:41.514994   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1014 15:02:41.515093   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515105   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:41.569878   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401013   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.885889648s)
	I1014 15:02:43.401053   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1014 15:02:43.401068   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.831164682s)
	I1014 15:02:43.401082   71679 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:43.401131   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401139   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:41.809862   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.810054   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:45.810567   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.432054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.932139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.432261   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.932517   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.432959   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.933103   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.432845   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.932825   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.432059   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.932745   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.639020   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:47.136927   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:49.137423   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:46.799144   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.397987929s)
	I1014 15:02:46.799198   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 15:02:46.799201   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398044957s)
	I1014 15:02:46.799222   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1014 15:02:46.799249   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.799295   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:46.799296   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.804398   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1014 15:02:48.971377   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.171989764s)
	I1014 15:02:48.971409   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1014 15:02:48.971436   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.971481   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.309980   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.311361   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:48.432869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.432754   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.432199   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.932861   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.432404   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.932097   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.432569   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.933078   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.141481   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.638306   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.935341   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.963834471s)
	I1014 15:02:50.935373   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1014 15:02:50.935401   71679 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:50.935452   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:51.683211   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 15:02:51.683268   71679 cache_images.go:123] Successfully loaded all cached images
	I1014 15:02:51.683277   71679 cache_images.go:92] duration metric: took 15.097525447s to LoadCachedImages
	I1014 15:02:51.683293   71679 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.31.1 crio true true} ...
	I1014 15:02:51.683441   71679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:51.683525   71679 ssh_runner.go:195] Run: crio config
	I1014 15:02:51.737769   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:02:51.737790   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:51.737799   71679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:51.737818   71679 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-813300 NodeName:no-preload-813300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:02:51.737955   71679 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-813300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:51.738019   71679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:02:51.749175   71679 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:51.749241   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:51.759120   71679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1014 15:02:51.777293   71679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:51.795073   71679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1014 15:02:51.815094   71679 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:51.819087   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:51.831806   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:51.953191   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:51.972342   71679 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300 for IP: 192.168.61.13
	I1014 15:02:51.972362   71679 certs.go:194] generating shared ca certs ...
	I1014 15:02:51.972379   71679 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:51.972534   71679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:51.972583   71679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:51.972597   71679 certs.go:256] generating profile certs ...
	I1014 15:02:51.972732   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/client.key
	I1014 15:02:51.972822   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key.4d535e2d
	I1014 15:02:51.972885   71679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key
	I1014 15:02:51.973064   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:51.973102   71679 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:51.973111   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:51.973151   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:51.973180   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:51.973203   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:51.973260   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:51.974077   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:52.019451   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:52.048323   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:52.086241   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:52.129342   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:02:52.157243   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:52.189093   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:52.214980   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:02:52.241595   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:52.270329   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:52.295153   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:52.321303   71679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:52.339181   71679 ssh_runner.go:195] Run: openssl version
	I1014 15:02:52.345152   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:52.357167   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362387   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362442   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.369003   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:52.380917   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:52.392884   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397876   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397942   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.404038   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:52.415841   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:52.426973   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431848   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431914   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.439851   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:52.455014   71679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:52.460088   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:52.466495   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:52.472659   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:52.483107   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:52.491272   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:52.497692   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
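The openssl -checkend 86400 calls above confirm that none of the existing control-plane certificates expire within the next 24 hours; a failure here would force the restart path to regenerate them. The same check expressed directly against a PEM file in Go (certExpiresWithin is an illustrative helper, not part of minikube):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpiresWithin reports whether the first certificate in the PEM file at path
	// expires within d, mirroring `openssl x509 -noout -in <path> -checkend <seconds>`.
	func certExpiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := certExpiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		fmt.Println(expiring, err)
	}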
	I1014 15:02:52.504352   71679 kubeadm.go:392] StartCluster: {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:52.504456   71679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:52.504502   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.544010   71679 cri.go:89] found id: ""
	I1014 15:02:52.544074   71679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:52.554296   71679 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:52.554314   71679 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:52.554364   71679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:52.564193   71679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:52.565367   71679 kubeconfig.go:125] found "no-preload-813300" server: "https://192.168.61.13:8443"
	I1014 15:02:52.567519   71679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:52.577268   71679 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.13
	I1014 15:02:52.577296   71679 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:52.577305   71679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:52.577343   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.614462   71679 cri.go:89] found id: ""
	I1014 15:02:52.614551   71679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:52.631835   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:52.642314   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:52.642334   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:52.642378   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:52.652036   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:52.652114   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:52.662263   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:52.672145   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:52.672214   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:52.682085   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.691628   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:52.691706   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.701314   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:52.711232   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:52.711291   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
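The grep/rm sequence above is the stale-kubeconfig check of the restart path: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is grepped for the control-plane endpoint and removed when the endpoint is not found. Here the files do not exist yet, so every grep exits with status 2 and every rm -f is a no-op before kubeadm regenerates the files. A compact sketch of that loop (removeStaleKubeconfigs is an illustrative helper):

	package main

	import "os/exec"

	// removeStaleKubeconfigs deletes any kubeconfig under /etc/kubernetes that does not
	// reference the expected control-plane endpoint, so kubeadm will regenerate it.
	func removeStaleKubeconfigs(endpoint string) {
		for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
			path := "/etc/kubernetes/" + f
			// grep exits non-zero when the endpoint is missing or the file does not exist.
			if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
				_ = exec.Command("sudo", "rm", "-f", path).Run()
			}
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8443")
	}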
	I1014 15:02:52.722480   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:52.733359   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:52.849407   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.647528   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.863718   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.938091   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:54.046445   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:54.046544   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.546715   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.047285   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.062239   71679 api_server.go:72] duration metric: took 1.015804644s to wait for apiserver process to appear ...
	I1014 15:02:55.062265   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:55.062296   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:55.062806   71679 api_server.go:269] stopped: https://192.168.61.13:8443/healthz: Get "https://192.168.61.13:8443/healthz": dial tcp 192.168.61.13:8443: connect: connection refused
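From this point the wait moves from the process check to the /healthz endpoint, and the early responses are expected failures: first connection refused while the apiserver socket comes up, then 403 because the probe is unauthenticated, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish. A rough sketch of such a tolerant polling loop (waitForHealthz is an illustrative name; the real client trusts the cluster CA rather than skipping TLS verification):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK,
	// treating connection errors, 403 and 500 as "not ready yet" rather than fatal.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// TLS verification is skipped only in this sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.61.13:8443/healthz", 4*time.Minute))
	}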
	I1014 15:02:52.811186   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.309901   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.432335   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.932860   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.433105   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.933031   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.432058   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.932422   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.432618   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.932727   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.432265   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.932733   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.136357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.136956   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.562748   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.274557   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.274587   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.274625   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.296655   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.296682   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.563094   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.567676   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:58.567717   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.063266   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.067656   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.067697   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.563300   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.569667   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.569699   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:03:00.063305   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:03:00.067834   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:03:00.079522   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:03:00.079555   71679 api_server.go:131] duration metric: took 5.017283463s to wait for apiserver health ...
	I1014 15:03:00.079565   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:03:00.079572   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:03:00.081793   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:03:00.083132   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:03:00.095329   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:03:00.114972   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:03:00.148816   71679 system_pods.go:59] 8 kube-system pods found
	I1014 15:03:00.148849   71679 system_pods.go:61] "coredns-7c65d6cfc9-5cft7" [43bb92da-74e8-4430-a889-3c23ed3fef67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:03:00.148859   71679 system_pods.go:61] "etcd-no-preload-813300" [c3e9137c-855e-49e2-8891-8df57707f75a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:03:00.148867   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [683c2d48-6c84-470c-96e5-0706a1884ee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:03:00.148872   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [405991ef-9b48-4770-ba31-a213f0eae077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:03:00.148882   71679 system_pods.go:61] "kube-proxy-jd4t4" [6c5c517b-855e-440c-976e-9c5e5d0710f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:03:00.148887   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [e76569e6-74c8-44dd-b283-a82072226686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:03:00.148892   71679 system_pods.go:61] "metrics-server-6867b74b74-br4tl" [5b3425c6-9847-447d-a9ab-076c7cc1634f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:03:00.148896   71679 system_pods.go:61] "storage-provisioner" [2c52e790-afa9-4131-8e28-801eb3f822d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 15:03:00.148906   71679 system_pods.go:74] duration metric: took 33.908487ms to wait for pod list to return data ...
	I1014 15:03:00.148918   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:03:00.161000   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:03:00.161029   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:03:00.161042   71679 node_conditions.go:105] duration metric: took 12.118841ms to run NodePressure ...
	I1014 15:03:00.161067   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:03:00.510702   71679 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515692   71679 kubeadm.go:739] kubelet initialised
	I1014 15:03:00.515715   71679 kubeadm.go:740] duration metric: took 4.986873ms waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515724   71679 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:03:00.521483   71679 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:57.810518   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:59.811287   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.432774   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.932666   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.433020   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.932671   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.432717   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.932917   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.432735   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.932668   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.432260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.932075   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.137257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.137876   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.528402   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.530210   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:04.530241   71679 pod_ready.go:82] duration metric: took 4.008725187s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:04.530254   71679 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:02.309134   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.311421   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:03.432139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.932241   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.432421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.932869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.432972   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.933010   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.432409   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.932778   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.432067   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.932749   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.636760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:07.136410   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.137483   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.537318   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.037462   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.810244   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.810932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.813334   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.432529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.932034   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.933054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.432938   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.932661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.432392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.932068   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.432066   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.932122   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.636654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.637819   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.536905   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:10.536932   71679 pod_ready.go:82] duration metric: took 6.006669219s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:10.536945   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:12.551283   71679 pod_ready.go:103] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.044142   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.044166   71679 pod_ready.go:82] duration metric: took 2.507213726s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.044176   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049176   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.049196   71679 pod_ready.go:82] duration metric: took 5.01377ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049206   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053623   71679 pod_ready.go:93] pod "kube-proxy-jd4t4" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.053646   71679 pod_ready.go:82] duration metric: took 4.434586ms for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053654   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559610   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.559632   71679 pod_ready.go:82] duration metric: took 505.972722ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559642   71679 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.309622   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.432556   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.932427   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.432053   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.932460   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.432714   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.933071   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.432567   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.932414   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.432985   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.932960   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.136599   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.137964   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.566234   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.567065   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:20.066221   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.309837   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:19.310194   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.433026   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.932015   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.932030   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.433050   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.932658   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.432667   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.933045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:21.933127   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:21.973476   72639 cri.go:89] found id: ""
	I1014 15:03:21.973507   72639 logs.go:282] 0 containers: []
	W1014 15:03:21.973517   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:21.973523   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:21.973584   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:22.011700   72639 cri.go:89] found id: ""
	I1014 15:03:22.011732   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.011742   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:22.011748   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:22.011814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:22.047721   72639 cri.go:89] found id: ""
	I1014 15:03:22.047744   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.047752   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:22.047762   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:22.047814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:22.091618   72639 cri.go:89] found id: ""
	I1014 15:03:22.091644   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.091652   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:22.091657   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:22.091706   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:22.129997   72639 cri.go:89] found id: ""
	I1014 15:03:22.130036   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.130047   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:22.130055   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:22.130114   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:22.168024   72639 cri.go:89] found id: ""
	I1014 15:03:22.168053   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.168061   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:22.168067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:22.168136   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:22.202633   72639 cri.go:89] found id: ""
	I1014 15:03:22.202660   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.202670   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:22.202677   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:22.202739   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:22.238224   72639 cri.go:89] found id: ""
	I1014 15:03:22.238251   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.238259   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:22.238267   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:22.238278   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:22.251940   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:22.251991   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:22.379777   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:22.379799   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:22.379814   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:22.456468   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:22.456507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:22.495404   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:22.495433   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:20.636995   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.637141   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.066371   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.566023   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:21.809579   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.309010   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:25.048061   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:25.068586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:25.068658   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:25.121199   72639 cri.go:89] found id: ""
	I1014 15:03:25.121228   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.121237   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:25.121243   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:25.121303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:25.174705   72639 cri.go:89] found id: ""
	I1014 15:03:25.174738   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.174749   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:25.174757   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:25.174815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:25.236972   72639 cri.go:89] found id: ""
	I1014 15:03:25.237002   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.237013   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:25.237020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:25.237077   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:25.276443   72639 cri.go:89] found id: ""
	I1014 15:03:25.276473   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.276483   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:25.276489   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:25.276541   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:25.314573   72639 cri.go:89] found id: ""
	I1014 15:03:25.314623   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.314636   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:25.314645   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:25.314708   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:25.357489   72639 cri.go:89] found id: ""
	I1014 15:03:25.357515   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.357525   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:25.357533   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:25.357595   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:25.397504   72639 cri.go:89] found id: ""
	I1014 15:03:25.397527   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.397538   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:25.397546   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:25.397597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:25.433139   72639 cri.go:89] found id: ""
	I1014 15:03:25.433162   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.433170   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:25.433179   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:25.433193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:25.448088   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:25.448121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:25.522377   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:25.522401   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:25.522415   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:25.595505   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:25.595538   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:25.643478   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:25.643511   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:25.137557   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.637096   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.067425   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.565568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:26.809419   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.309193   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.310234   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:28.195236   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:28.208612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:28.208686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:28.248538   72639 cri.go:89] found id: ""
	I1014 15:03:28.248569   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.248581   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:28.248588   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:28.248652   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:28.286103   72639 cri.go:89] found id: ""
	I1014 15:03:28.286131   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.286143   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:28.286149   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:28.286209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:28.321335   72639 cri.go:89] found id: ""
	I1014 15:03:28.321371   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.321383   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:28.321391   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:28.321453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:28.358538   72639 cri.go:89] found id: ""
	I1014 15:03:28.358571   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.358581   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:28.358588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:28.358661   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:28.397058   72639 cri.go:89] found id: ""
	I1014 15:03:28.397087   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.397099   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:28.397106   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:28.397175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:28.434010   72639 cri.go:89] found id: ""
	I1014 15:03:28.434032   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.434040   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:28.434045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:28.434095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:28.474646   72639 cri.go:89] found id: ""
	I1014 15:03:28.474672   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.474681   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:28.474687   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:28.474736   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:28.512833   72639 cri.go:89] found id: ""
	I1014 15:03:28.512860   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.512871   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:28.512882   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:28.512894   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:28.526233   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:28.526262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:28.601366   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:28.601393   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:28.601416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:28.690261   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:28.690300   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:28.734134   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:28.734158   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.290184   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:31.303493   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:31.303558   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:31.341521   72639 cri.go:89] found id: ""
	I1014 15:03:31.341552   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.341563   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:31.341569   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:31.341627   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:31.378811   72639 cri.go:89] found id: ""
	I1014 15:03:31.378839   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.378851   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:31.378859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:31.378922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:31.416282   72639 cri.go:89] found id: ""
	I1014 15:03:31.416310   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.416321   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:31.416328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:31.416392   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:31.456089   72639 cri.go:89] found id: ""
	I1014 15:03:31.456123   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.456134   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:31.456142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:31.456202   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:31.496429   72639 cri.go:89] found id: ""
	I1014 15:03:31.496468   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.496478   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:31.496485   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:31.496548   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:31.535226   72639 cri.go:89] found id: ""
	I1014 15:03:31.535248   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.535256   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:31.535262   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:31.535321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:31.572580   72639 cri.go:89] found id: ""
	I1014 15:03:31.572608   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.572623   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:31.572631   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:31.572691   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:31.606736   72639 cri.go:89] found id: ""
	I1014 15:03:31.606759   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.606766   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:31.606774   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:31.606785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:31.646048   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:31.646078   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.696818   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:31.696851   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:31.710099   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:31.710128   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:31.787756   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:31.787783   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:31.787798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:30.136436   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:32.138037   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.139660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.566034   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.567029   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.809434   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.309487   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.369392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:34.383263   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:34.383344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:34.417763   72639 cri.go:89] found id: ""
	I1014 15:03:34.417797   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.417809   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:34.417816   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:34.417890   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:34.453361   72639 cri.go:89] found id: ""
	I1014 15:03:34.453391   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.453402   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:34.453409   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:34.453488   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:34.490878   72639 cri.go:89] found id: ""
	I1014 15:03:34.490905   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.490913   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:34.490919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:34.490980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:34.527554   72639 cri.go:89] found id: ""
	I1014 15:03:34.527584   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.527595   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:34.527603   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:34.527655   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:34.564813   72639 cri.go:89] found id: ""
	I1014 15:03:34.564841   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.564851   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:34.564857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:34.564903   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:34.599899   72639 cri.go:89] found id: ""
	I1014 15:03:34.599930   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.599942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:34.599949   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:34.600019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:34.641686   72639 cri.go:89] found id: ""
	I1014 15:03:34.641717   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.641728   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:34.641735   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:34.641794   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:34.681154   72639 cri.go:89] found id: ""
	I1014 15:03:34.681184   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.681195   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:34.681205   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:34.681218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:34.719638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:34.719672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:34.771687   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:34.771722   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:34.785943   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:34.785972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:34.861821   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:34.861861   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:34.861875   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.441605   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:37.456763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:37.456828   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:37.494176   72639 cri.go:89] found id: ""
	I1014 15:03:37.494202   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.494210   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:37.494216   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:37.494268   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:37.538802   72639 cri.go:89] found id: ""
	I1014 15:03:37.538834   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.538846   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:37.538853   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:37.538913   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:37.586282   72639 cri.go:89] found id: ""
	I1014 15:03:37.586312   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.586322   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:37.586328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:37.586397   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:37.632673   72639 cri.go:89] found id: ""
	I1014 15:03:37.632698   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.632709   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:37.632715   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:37.632771   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:37.673340   72639 cri.go:89] found id: ""
	I1014 15:03:37.673364   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.673372   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:37.673377   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:37.673427   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:37.718725   72639 cri.go:89] found id: ""
	I1014 15:03:37.718750   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.718758   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:37.718764   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:37.718807   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:37.760560   72639 cri.go:89] found id: ""
	I1014 15:03:37.760587   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.760597   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:37.760605   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:37.760665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:37.800912   72639 cri.go:89] found id: ""
	I1014 15:03:37.800941   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.800949   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:37.800957   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:37.800968   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:37.815338   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:37.815363   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:37.893018   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:37.893050   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:37.893067   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.978315   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:37.978349   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:36.637635   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:39.136295   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.065915   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.066310   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.810020   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.810460   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.019760   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:38.019788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.570918   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:40.586058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:40.586122   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:40.623753   72639 cri.go:89] found id: ""
	I1014 15:03:40.623784   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.623795   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:40.623801   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:40.623862   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:40.663909   72639 cri.go:89] found id: ""
	I1014 15:03:40.663937   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.663946   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:40.663953   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:40.664008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:40.698572   72639 cri.go:89] found id: ""
	I1014 15:03:40.698615   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.698626   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:40.698633   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:40.698683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:40.734882   72639 cri.go:89] found id: ""
	I1014 15:03:40.734907   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.734914   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:40.734920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:40.734976   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:40.768429   72639 cri.go:89] found id: ""
	I1014 15:03:40.768455   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.768462   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:40.768468   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:40.768527   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:40.803429   72639 cri.go:89] found id: ""
	I1014 15:03:40.803456   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.803466   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:40.803474   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:40.803535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:40.842854   72639 cri.go:89] found id: ""
	I1014 15:03:40.842883   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.842905   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:40.842913   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:40.842988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:40.879638   72639 cri.go:89] found id: ""
	I1014 15:03:40.879661   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.879669   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:40.879677   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:40.879687   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:40.924949   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:40.924983   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.976271   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:40.976304   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:40.991492   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:40.991520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:41.071418   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:41.071439   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:41.071453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:41.136877   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.637356   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.566353   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.065982   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.066405   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.310188   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.811549   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.652387   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:43.666239   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:43.666317   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:43.705726   72639 cri.go:89] found id: ""
	I1014 15:03:43.705752   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.705761   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:43.705766   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:43.705814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:43.745648   72639 cri.go:89] found id: ""
	I1014 15:03:43.745672   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.745680   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:43.745685   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:43.745731   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:43.783032   72639 cri.go:89] found id: ""
	I1014 15:03:43.783055   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.783063   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:43.783068   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:43.783115   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:43.820582   72639 cri.go:89] found id: ""
	I1014 15:03:43.820607   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.820617   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:43.820623   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:43.820669   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:43.862312   72639 cri.go:89] found id: ""
	I1014 15:03:43.862338   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.862348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:43.862353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:43.862404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:43.898338   72639 cri.go:89] found id: ""
	I1014 15:03:43.898368   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.898379   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:43.898388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:43.898448   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:43.934682   72639 cri.go:89] found id: ""
	I1014 15:03:43.934709   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.934719   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:43.934726   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:43.934781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:43.970209   72639 cri.go:89] found id: ""
	I1014 15:03:43.970237   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.970247   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:43.970257   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:43.970269   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:44.024791   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:44.024832   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:44.038431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:44.038457   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:44.117255   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:44.117291   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:44.117308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:44.199397   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:44.199436   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:46.739819   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:46.755553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:46.755625   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:46.797225   72639 cri.go:89] found id: ""
	I1014 15:03:46.797253   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.797265   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:46.797272   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:46.797335   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:46.832999   72639 cri.go:89] found id: ""
	I1014 15:03:46.833025   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.833036   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:46.833043   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:46.833103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:46.872711   72639 cri.go:89] found id: ""
	I1014 15:03:46.872733   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.872741   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:46.872746   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:46.872795   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:46.909945   72639 cri.go:89] found id: ""
	I1014 15:03:46.909968   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.909977   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:46.909985   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:46.910046   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:46.946036   72639 cri.go:89] found id: ""
	I1014 15:03:46.946067   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.946080   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:46.946087   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:46.946141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:46.981772   72639 cri.go:89] found id: ""
	I1014 15:03:46.981806   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.981819   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:46.981828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:46.981896   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:47.022761   72639 cri.go:89] found id: ""
	I1014 15:03:47.022790   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.022800   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:47.022807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:47.022869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:47.057368   72639 cri.go:89] found id: ""
	I1014 15:03:47.057392   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.057400   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:47.057408   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:47.057418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:47.134369   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:47.134408   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:47.179550   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:47.179586   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:47.233317   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:47.233355   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:47.247598   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:47.247629   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:47.321309   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:45.637760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.136826   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:47.067003   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.565410   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:50.812241   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.821955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:49.836907   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:49.836975   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:49.876651   72639 cri.go:89] found id: ""
	I1014 15:03:49.876682   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.876694   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:49.876713   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:49.876781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:49.913440   72639 cri.go:89] found id: ""
	I1014 15:03:49.913464   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.913473   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:49.913479   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:49.913535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:49.949352   72639 cri.go:89] found id: ""
	I1014 15:03:49.949383   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.949395   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:49.949402   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:49.949463   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:49.984599   72639 cri.go:89] found id: ""
	I1014 15:03:49.984629   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.984641   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:49.984649   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:49.984709   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:50.028049   72639 cri.go:89] found id: ""
	I1014 15:03:50.028072   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.028083   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:50.028090   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:50.028166   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:50.062272   72639 cri.go:89] found id: ""
	I1014 15:03:50.062294   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.062302   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:50.062308   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:50.062358   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:50.099722   72639 cri.go:89] found id: ""
	I1014 15:03:50.099750   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.099762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:50.099769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:50.099830   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:50.139984   72639 cri.go:89] found id: ""
	I1014 15:03:50.140005   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.140013   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:50.140020   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:50.140032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:50.218467   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:50.218500   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:50.260600   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:50.260635   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:50.313725   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:50.313757   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:50.328431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:50.328462   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:50.401334   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:52.901787   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:52.917836   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:52.917902   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:52.955387   72639 cri.go:89] found id: ""
	I1014 15:03:52.955418   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.955431   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:52.955440   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:52.955504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:52.990890   72639 cri.go:89] found id: ""
	I1014 15:03:52.990924   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.990936   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:52.990945   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:52.991004   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:50.636581   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.137639   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:51.566403   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:54.066690   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.310174   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:55.809402   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.032344   72639 cri.go:89] found id: ""
	I1014 15:03:53.032374   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.032384   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:53.032390   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:53.032458   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:53.073501   72639 cri.go:89] found id: ""
	I1014 15:03:53.073527   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.073537   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:53.073544   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:53.073602   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:53.114273   72639 cri.go:89] found id: ""
	I1014 15:03:53.114307   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.114316   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:53.114334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:53.114389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:53.155448   72639 cri.go:89] found id: ""
	I1014 15:03:53.155475   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.155484   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:53.155490   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:53.155539   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:53.191304   72639 cri.go:89] found id: ""
	I1014 15:03:53.191338   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.191350   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:53.191357   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:53.191438   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:53.224664   72639 cri.go:89] found id: ""
	I1014 15:03:53.224691   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.224702   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:53.224727   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:53.224744   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:53.275751   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:53.275786   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:53.289275   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:53.289303   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:53.369828   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:53.369855   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:53.369871   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:53.457248   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:53.457285   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:56.003384   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:56.017722   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:56.017782   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:56.056644   72639 cri.go:89] found id: ""
	I1014 15:03:56.056675   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.056686   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:56.056694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:56.056757   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:56.094482   72639 cri.go:89] found id: ""
	I1014 15:03:56.094507   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.094517   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:56.094524   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:56.094583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:56.129884   72639 cri.go:89] found id: ""
	I1014 15:03:56.129913   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.129921   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:56.129926   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:56.129974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:56.167171   72639 cri.go:89] found id: ""
	I1014 15:03:56.167198   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.167206   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:56.167211   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:56.167264   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:56.204400   72639 cri.go:89] found id: ""
	I1014 15:03:56.204433   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.204442   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:56.204447   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:56.204494   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:56.240407   72639 cri.go:89] found id: ""
	I1014 15:03:56.240437   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.240448   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:56.240456   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:56.240517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:56.277653   72639 cri.go:89] found id: ""
	I1014 15:03:56.277679   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.277687   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:56.277693   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:56.277738   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:56.313423   72639 cri.go:89] found id: ""
	I1014 15:03:56.313451   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.313459   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:56.313468   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:56.313480   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:56.368094   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:56.368133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:56.382563   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:56.382621   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:56.455106   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:56.455130   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:56.455144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:56.532288   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:56.532329   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:55.636007   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:57.637196   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:56.566763   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.066227   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:58.309184   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:00.309370   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.072469   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:59.089024   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:59.089094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:59.130798   72639 cri.go:89] found id: ""
	I1014 15:03:59.130829   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.130840   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:59.130848   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:59.130908   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:59.167828   72639 cri.go:89] found id: ""
	I1014 15:03:59.167854   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.167864   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:59.167871   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:59.167932   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:59.223482   72639 cri.go:89] found id: ""
	I1014 15:03:59.223509   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.223520   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:59.223528   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:59.223590   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:59.261186   72639 cri.go:89] found id: ""
	I1014 15:03:59.261231   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.261243   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:59.261251   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:59.261314   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:59.296924   72639 cri.go:89] found id: ""
	I1014 15:03:59.296985   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.297000   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:59.297008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:59.297084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:59.333891   72639 cri.go:89] found id: ""
	I1014 15:03:59.333915   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.333923   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:59.333929   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:59.333991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:59.374106   72639 cri.go:89] found id: ""
	I1014 15:03:59.374134   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.374143   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:59.374150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:59.374222   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:59.412256   72639 cri.go:89] found id: ""
	I1014 15:03:59.412283   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.412291   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:59.412298   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:59.412308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:59.492869   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:59.492904   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:59.492923   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:59.576441   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:59.576473   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.618638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:59.618668   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:59.671295   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:59.671331   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.184689   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:02.197763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:02.197833   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:02.231709   72639 cri.go:89] found id: ""
	I1014 15:04:02.231734   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.231746   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:02.231753   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:02.231815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:02.269259   72639 cri.go:89] found id: ""
	I1014 15:04:02.269291   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.269303   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:02.269311   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:02.269390   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:02.305926   72639 cri.go:89] found id: ""
	I1014 15:04:02.305956   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.305967   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:02.305975   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:02.306034   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:02.349516   72639 cri.go:89] found id: ""
	I1014 15:04:02.349544   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.349557   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:02.349563   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:02.349622   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:02.388334   72639 cri.go:89] found id: ""
	I1014 15:04:02.388361   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.388371   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:02.388376   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:02.388428   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:02.422742   72639 cri.go:89] found id: ""
	I1014 15:04:02.422770   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.422781   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:02.422789   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:02.422850   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:02.463686   72639 cri.go:89] found id: ""
	I1014 15:04:02.463710   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.463718   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:02.463724   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:02.463770   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:02.498352   72639 cri.go:89] found id: ""
	I1014 15:04:02.498383   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.498394   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:02.498404   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:02.498418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.512531   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:02.512561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:02.585331   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:02.585359   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:02.585373   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:02.667376   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:02.667414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:02.708101   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:02.708133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:00.136170   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.138198   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:01.566456   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.066934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.309906   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.310009   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.310084   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:05.259839   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:05.273102   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:05.273186   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:05.311745   72639 cri.go:89] found id: ""
	I1014 15:04:05.311768   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.311776   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:05.311787   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:05.311834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:05.349313   72639 cri.go:89] found id: ""
	I1014 15:04:05.349336   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.349344   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:05.349352   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:05.349416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:05.388003   72639 cri.go:89] found id: ""
	I1014 15:04:05.388026   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.388034   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:05.388039   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:05.388098   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:05.426636   72639 cri.go:89] found id: ""
	I1014 15:04:05.426665   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.426676   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:05.426683   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:05.426745   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:05.461945   72639 cri.go:89] found id: ""
	I1014 15:04:05.461974   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.461983   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:05.461989   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:05.462049   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:05.497099   72639 cri.go:89] found id: ""
	I1014 15:04:05.497130   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.497142   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:05.497149   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:05.497216   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:05.531621   72639 cri.go:89] found id: ""
	I1014 15:04:05.531652   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.531664   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:05.531671   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:05.531729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:05.568950   72639 cri.go:89] found id: ""
	I1014 15:04:05.568973   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.568983   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:05.568992   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:05.569012   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.624806   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:05.624846   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:05.651912   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:05.651961   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:05.740342   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:05.740369   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:05.740384   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:05.817901   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:05.817932   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:04.636643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:07.137525   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.566519   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.567458   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.809718   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.809968   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.360267   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:08.373249   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:08.373325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:08.409485   72639 cri.go:89] found id: ""
	I1014 15:04:08.409520   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.409535   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:08.409542   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:08.409604   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:08.444977   72639 cri.go:89] found id: ""
	I1014 15:04:08.445000   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.445008   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:08.445014   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:08.445061   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:08.478080   72639 cri.go:89] found id: ""
	I1014 15:04:08.478108   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.478117   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:08.478123   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:08.478169   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:08.511510   72639 cri.go:89] found id: ""
	I1014 15:04:08.511536   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.511545   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:08.511552   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:08.511603   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:08.546260   72639 cri.go:89] found id: ""
	I1014 15:04:08.546285   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.546292   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:08.546299   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:08.546347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:08.582775   72639 cri.go:89] found id: ""
	I1014 15:04:08.582799   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.582810   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:08.582816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:08.582875   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:08.619208   72639 cri.go:89] found id: ""
	I1014 15:04:08.619231   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.619239   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:08.619244   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:08.619299   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:08.654823   72639 cri.go:89] found id: ""
	I1014 15:04:08.654849   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.654860   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:08.654870   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:08.654885   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:08.704543   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:08.704574   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:08.718111   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:08.718144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:08.792267   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:08.792290   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:08.792309   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:08.870178   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:08.870210   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:11.409975   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:11.432171   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:11.432243   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:11.468997   72639 cri.go:89] found id: ""
	I1014 15:04:11.469021   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.469030   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:11.469035   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:11.469094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:11.504312   72639 cri.go:89] found id: ""
	I1014 15:04:11.504337   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.504346   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:11.504354   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:11.504417   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:11.540628   72639 cri.go:89] found id: ""
	I1014 15:04:11.540654   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.540662   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:11.540667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:11.540729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:11.576466   72639 cri.go:89] found id: ""
	I1014 15:04:11.576491   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.576498   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:11.576506   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:11.576550   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:11.611466   72639 cri.go:89] found id: ""
	I1014 15:04:11.611501   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.611512   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:11.611519   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:11.611578   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:11.650089   72639 cri.go:89] found id: ""
	I1014 15:04:11.650116   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.650126   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:11.650133   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:11.650191   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:11.686538   72639 cri.go:89] found id: ""
	I1014 15:04:11.686563   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.686571   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:11.686577   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:11.686654   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:11.725494   72639 cri.go:89] found id: ""
	I1014 15:04:11.725517   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.725524   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:11.725532   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:11.725545   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:11.779062   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:11.779102   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:11.792726   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:11.792753   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:11.867945   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:11.867972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:11.867986   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:11.952299   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:11.952340   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:09.636140   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:11.636455   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.136183   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.567626   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.065875   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.066484   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.310523   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.811094   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.493922   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:14.506754   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:14.506817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:14.540456   72639 cri.go:89] found id: ""
	I1014 15:04:14.540480   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.540489   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:14.540495   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:14.540545   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:14.574819   72639 cri.go:89] found id: ""
	I1014 15:04:14.574843   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.574853   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:14.574859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:14.574917   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:14.608834   72639 cri.go:89] found id: ""
	I1014 15:04:14.608859   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.608868   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:14.608873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:14.608920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:14.644182   72639 cri.go:89] found id: ""
	I1014 15:04:14.644210   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.644218   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:14.644223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:14.644283   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:14.679113   72639 cri.go:89] found id: ""
	I1014 15:04:14.679145   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.679156   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:14.679164   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:14.679228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:14.716111   72639 cri.go:89] found id: ""
	I1014 15:04:14.716142   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.716154   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:14.716167   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:14.716220   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:14.755884   72639 cri.go:89] found id: ""
	I1014 15:04:14.755907   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.755915   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:14.755920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:14.755968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:14.794167   72639 cri.go:89] found id: ""
	I1014 15:04:14.794195   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.794207   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:14.794217   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:14.794234   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:14.844828   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:14.844864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:14.859424   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:14.859451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:14.936660   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:14.936687   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:14.936703   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:15.017034   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:15.017070   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:17.555604   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:17.570628   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:17.570687   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:17.612919   72639 cri.go:89] found id: ""
	I1014 15:04:17.612943   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.612951   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:17.612956   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:17.613002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:17.651178   72639 cri.go:89] found id: ""
	I1014 15:04:17.651210   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.651220   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:17.651226   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:17.651278   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:17.687923   72639 cri.go:89] found id: ""
	I1014 15:04:17.687955   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.687966   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:17.687973   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:17.688024   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:17.724759   72639 cri.go:89] found id: ""
	I1014 15:04:17.724790   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.724800   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:17.724807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:17.724866   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:17.760189   72639 cri.go:89] found id: ""
	I1014 15:04:17.760212   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.760220   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:17.760226   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:17.760274   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:17.797517   72639 cri.go:89] found id: ""
	I1014 15:04:17.797541   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.797549   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:17.797554   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:17.797601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:17.833238   72639 cri.go:89] found id: ""
	I1014 15:04:17.833261   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.833270   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:17.833275   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:17.833321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:17.868828   72639 cri.go:89] found id: ""
	I1014 15:04:17.868857   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.868865   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:17.868873   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:17.868883   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:17.956972   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:17.957011   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:16.137357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.636865   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:17.067415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:19.566146   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.310380   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:20.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.006354   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:18.006390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:18.056237   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:18.056271   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:18.070763   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:18.070792   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:18.147471   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:20.648238   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:20.661465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:20.661534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:20.695869   72639 cri.go:89] found id: ""
	I1014 15:04:20.695894   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.695902   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:20.695907   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:20.695957   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:20.729271   72639 cri.go:89] found id: ""
	I1014 15:04:20.729295   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.729313   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:20.729319   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:20.729364   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:20.767110   72639 cri.go:89] found id: ""
	I1014 15:04:20.767137   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.767147   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:20.767154   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:20.767209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:20.802752   72639 cri.go:89] found id: ""
	I1014 15:04:20.802781   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.802791   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:20.802798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:20.802846   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:20.841958   72639 cri.go:89] found id: ""
	I1014 15:04:20.841987   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.841998   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:20.842005   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:20.842066   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:20.878869   72639 cri.go:89] found id: ""
	I1014 15:04:20.878896   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.878907   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:20.878914   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:20.878974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:20.913802   72639 cri.go:89] found id: ""
	I1014 15:04:20.913838   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.913852   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:20.913861   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:20.913922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:20.948350   72639 cri.go:89] found id: ""
	I1014 15:04:20.948378   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.948395   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:20.948403   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:20.948416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:21.001065   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:21.001098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:21.014427   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:21.014458   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:21.091386   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:21.091412   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:21.091432   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:21.175255   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:21.175299   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:21.137358   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.636623   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.066415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:24.066476   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.809925   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:25.309528   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.718260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:23.732366   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:23.732445   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:23.767269   72639 cri.go:89] found id: ""
	I1014 15:04:23.767299   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.767311   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:23.767317   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:23.767379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:23.808502   72639 cri.go:89] found id: ""
	I1014 15:04:23.808532   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.808543   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:23.808550   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:23.808606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:23.845632   72639 cri.go:89] found id: ""
	I1014 15:04:23.845664   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.845677   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:23.845685   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:23.845753   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:23.880218   72639 cri.go:89] found id: ""
	I1014 15:04:23.880249   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.880261   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:23.880268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:23.880332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:23.915674   72639 cri.go:89] found id: ""
	I1014 15:04:23.915697   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.915705   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:23.915710   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:23.915767   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:23.950526   72639 cri.go:89] found id: ""
	I1014 15:04:23.950559   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.950570   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:23.950578   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:23.950656   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:23.986130   72639 cri.go:89] found id: ""
	I1014 15:04:23.986167   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.986178   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:23.986186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:23.986246   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:24.027112   72639 cri.go:89] found id: ""
	I1014 15:04:24.027141   72639 logs.go:282] 0 containers: []
	W1014 15:04:24.027154   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:24.027165   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:24.027181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:24.082559   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:24.082610   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:24.096900   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:24.096929   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:24.173293   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:24.173327   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:24.173341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:24.256921   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:24.256962   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:26.802073   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:26.817307   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:26.817366   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:26.855777   72639 cri.go:89] found id: ""
	I1014 15:04:26.855805   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.855817   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:26.855825   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:26.855876   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:26.892260   72639 cri.go:89] found id: ""
	I1014 15:04:26.892288   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.892300   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:26.892308   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:26.892369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:26.931066   72639 cri.go:89] found id: ""
	I1014 15:04:26.931103   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.931114   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:26.931122   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:26.931174   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:26.966890   72639 cri.go:89] found id: ""
	I1014 15:04:26.966923   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.966933   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:26.966941   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:26.967002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:27.001338   72639 cri.go:89] found id: ""
	I1014 15:04:27.001368   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.001379   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:27.001386   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:27.001454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:27.041798   72639 cri.go:89] found id: ""
	I1014 15:04:27.041830   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.041839   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:27.041844   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:27.041905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:27.080248   72639 cri.go:89] found id: ""
	I1014 15:04:27.080279   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.080288   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:27.080293   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:27.080341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:27.116207   72639 cri.go:89] found id: ""
	I1014 15:04:27.116234   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.116242   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:27.116250   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:27.116264   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:27.191149   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:27.191174   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:27.191203   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:27.275771   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:27.275808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:27.323223   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:27.323254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:27.375409   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:27.375455   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:26.137156   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.637895   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:26.066790   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.565208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:27.810315   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.309211   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:29.890408   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:29.904797   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:29.904853   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:29.938655   72639 cri.go:89] found id: ""
	I1014 15:04:29.938685   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.938698   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:29.938705   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:29.938765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:29.976477   72639 cri.go:89] found id: ""
	I1014 15:04:29.976508   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.976519   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:29.976526   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:29.976583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:30.014813   72639 cri.go:89] found id: ""
	I1014 15:04:30.014842   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.014853   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:30.014860   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:30.014926   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:30.050804   72639 cri.go:89] found id: ""
	I1014 15:04:30.050833   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.050844   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:30.050854   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:30.050918   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:30.087921   72639 cri.go:89] found id: ""
	I1014 15:04:30.087946   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.087954   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:30.087959   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:30.088016   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:30.125411   72639 cri.go:89] found id: ""
	I1014 15:04:30.125446   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.125458   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:30.125465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:30.125519   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:30.162067   72639 cri.go:89] found id: ""
	I1014 15:04:30.162099   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.162110   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:30.162118   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:30.162181   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:30.200376   72639 cri.go:89] found id: ""
	I1014 15:04:30.200406   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.200418   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:30.200435   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:30.200451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:30.279965   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:30.279992   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:30.280007   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:30.364866   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:30.364900   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:30.408808   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:30.408842   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:30.464473   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:30.464507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:32.980254   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:32.994254   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:32.994320   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:31.136531   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.137201   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.566228   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.567393   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.065955   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.810349   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.308794   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.035996   72639 cri.go:89] found id: ""
	I1014 15:04:33.036025   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.036036   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:33.036043   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:33.036103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:33.077494   72639 cri.go:89] found id: ""
	I1014 15:04:33.077522   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.077531   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:33.077538   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:33.077585   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:33.112666   72639 cri.go:89] found id: ""
	I1014 15:04:33.112695   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.112705   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:33.112711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:33.112772   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:33.150229   72639 cri.go:89] found id: ""
	I1014 15:04:33.150266   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.150276   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:33.150282   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:33.150336   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:33.186960   72639 cri.go:89] found id: ""
	I1014 15:04:33.186989   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.187001   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:33.187008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:33.187062   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:33.223596   72639 cri.go:89] found id: ""
	I1014 15:04:33.223631   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.223641   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:33.223647   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:33.223711   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:33.260137   72639 cri.go:89] found id: ""
	I1014 15:04:33.260162   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.260170   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:33.260175   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:33.260228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:33.298072   72639 cri.go:89] found id: ""
	I1014 15:04:33.298095   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.298103   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:33.298110   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:33.298121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:33.379587   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:33.379623   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:33.423427   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:33.423456   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:33.474644   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:33.474683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:33.488324   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:33.488354   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:33.556257   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.056955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:36.072461   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:36.072536   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:36.109467   72639 cri.go:89] found id: ""
	I1014 15:04:36.109493   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.109502   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:36.109509   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:36.109561   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:36.147985   72639 cri.go:89] found id: ""
	I1014 15:04:36.148012   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.148020   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:36.148025   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:36.148071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:36.183885   72639 cri.go:89] found id: ""
	I1014 15:04:36.183906   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.183914   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:36.183919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:36.183968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:36.220994   72639 cri.go:89] found id: ""
	I1014 15:04:36.221025   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.221036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:36.221044   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:36.221108   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:36.256586   72639 cri.go:89] found id: ""
	I1014 15:04:36.256612   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.256621   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:36.256627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:36.256683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:36.293229   72639 cri.go:89] found id: ""
	I1014 15:04:36.293256   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.293265   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:36.293272   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:36.293339   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:36.329254   72639 cri.go:89] found id: ""
	I1014 15:04:36.329279   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.329290   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:36.329297   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:36.329357   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:36.366495   72639 cri.go:89] found id: ""
	I1014 15:04:36.366526   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.366538   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:36.366548   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:36.366561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:36.420985   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:36.421018   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:36.435532   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:36.435565   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:36.510459   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.510484   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:36.510499   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:36.593057   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:36.593094   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:35.637182   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.637348   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.066334   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.566950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.309629   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.809500   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.138570   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:39.152280   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:39.152342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:39.186647   72639 cri.go:89] found id: ""
	I1014 15:04:39.186676   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.186687   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:39.186694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:39.186754   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:39.223560   72639 cri.go:89] found id: ""
	I1014 15:04:39.223586   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.223594   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:39.223599   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:39.223644   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:39.257835   72639 cri.go:89] found id: ""
	I1014 15:04:39.257867   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.257879   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:39.257886   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:39.257947   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:39.294656   72639 cri.go:89] found id: ""
	I1014 15:04:39.294684   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.294692   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:39.294699   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:39.294750   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:39.333474   72639 cri.go:89] found id: ""
	I1014 15:04:39.333503   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.333513   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:39.333520   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:39.333586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:39.374385   72639 cri.go:89] found id: ""
	I1014 15:04:39.374414   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.374424   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:39.374435   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:39.374483   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:39.412856   72639 cri.go:89] found id: ""
	I1014 15:04:39.412888   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.412899   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:39.412906   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:39.412966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:39.463087   72639 cri.go:89] found id: ""
	I1014 15:04:39.463115   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.463127   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:39.463138   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:39.463154   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:39.514309   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:39.514342   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:39.528947   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:39.528972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:39.603984   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:39.604004   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:39.604016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.685053   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:39.685093   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.234178   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:42.247421   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:42.247497   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:42.288496   72639 cri.go:89] found id: ""
	I1014 15:04:42.288521   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.288529   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:42.288535   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:42.288588   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:42.324346   72639 cri.go:89] found id: ""
	I1014 15:04:42.324382   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.324394   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:42.324401   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:42.324469   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:42.362879   72639 cri.go:89] found id: ""
	I1014 15:04:42.362910   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.362922   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:42.362928   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:42.362991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:42.399347   72639 cri.go:89] found id: ""
	I1014 15:04:42.399375   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.399383   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:42.399389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:42.399473   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:42.434942   72639 cri.go:89] found id: ""
	I1014 15:04:42.434971   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.434990   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:42.434999   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:42.435063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:42.470886   72639 cri.go:89] found id: ""
	I1014 15:04:42.470916   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.470928   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:42.470934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:42.470994   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:42.510713   72639 cri.go:89] found id: ""
	I1014 15:04:42.510742   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.510752   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:42.510758   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:42.510820   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:42.544506   72639 cri.go:89] found id: ""
	I1014 15:04:42.544538   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.544547   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:42.544559   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:42.544570   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.588658   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:42.588694   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:42.642165   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:42.642198   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:42.658073   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:42.658110   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:42.730486   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:42.730510   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:42.730524   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.637476   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.637715   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.137654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:42.065534   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.066309   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.809932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.309377   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.309699   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:45.307806   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:45.321664   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:45.321733   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:45.359670   72639 cri.go:89] found id: ""
	I1014 15:04:45.359697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.359708   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:45.359715   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:45.359781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:45.398673   72639 cri.go:89] found id: ""
	I1014 15:04:45.398703   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.398715   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:45.398722   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:45.398784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:45.441656   72639 cri.go:89] found id: ""
	I1014 15:04:45.441685   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.441697   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:45.441705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:45.441768   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:45.476159   72639 cri.go:89] found id: ""
	I1014 15:04:45.476188   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.476195   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:45.476201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:45.476263   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:45.513776   72639 cri.go:89] found id: ""
	I1014 15:04:45.513807   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.513819   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:45.513828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:45.513894   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:45.550336   72639 cri.go:89] found id: ""
	I1014 15:04:45.550371   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.550382   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:45.550388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:45.550450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:45.586668   72639 cri.go:89] found id: ""
	I1014 15:04:45.586697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.586705   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:45.586711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:45.586760   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:45.622530   72639 cri.go:89] found id: ""
	I1014 15:04:45.622559   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.622568   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:45.622576   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:45.622589   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:45.674471   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:45.674504   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:45.690430   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:45.690463   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:45.772133   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:45.772165   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:45.772181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.859835   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:45.859880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:46.636239   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.637696   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.565440   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.569076   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.309788   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.310209   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.434011   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:48.448747   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:48.448826   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:48.493642   72639 cri.go:89] found id: ""
	I1014 15:04:48.493668   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.493680   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:48.493687   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:48.493747   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:48.530298   72639 cri.go:89] found id: ""
	I1014 15:04:48.530327   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.530336   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:48.530344   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:48.530403   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:48.566215   72639 cri.go:89] found id: ""
	I1014 15:04:48.566242   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.566252   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:48.566261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:48.566325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:48.604528   72639 cri.go:89] found id: ""
	I1014 15:04:48.604553   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.604561   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:48.604566   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:48.604616   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:48.646152   72639 cri.go:89] found id: ""
	I1014 15:04:48.646180   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.646191   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:48.646198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:48.646257   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:48.682670   72639 cri.go:89] found id: ""
	I1014 15:04:48.682696   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.682704   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:48.682711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:48.682762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:48.722292   72639 cri.go:89] found id: ""
	I1014 15:04:48.722318   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.722326   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:48.722335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:48.722400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:48.762474   72639 cri.go:89] found id: ""
	I1014 15:04:48.762506   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.762518   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:48.762528   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:48.762553   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:48.776628   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:48.776652   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:48.849904   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:48.849928   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:48.849941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:48.927033   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:48.927068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.970775   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:48.970807   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:51.521113   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:51.535318   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:51.535389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:51.582631   72639 cri.go:89] found id: ""
	I1014 15:04:51.582658   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.582666   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:51.582671   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:51.582721   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:51.655323   72639 cri.go:89] found id: ""
	I1014 15:04:51.655362   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.655371   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:51.655376   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:51.655433   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:51.722837   72639 cri.go:89] found id: ""
	I1014 15:04:51.722863   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.722875   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:51.722882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:51.722939   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:51.759917   72639 cri.go:89] found id: ""
	I1014 15:04:51.759946   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.759957   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:51.759963   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:51.760023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:51.798656   72639 cri.go:89] found id: ""
	I1014 15:04:51.798689   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.798702   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:51.798711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:51.798777   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:51.839285   72639 cri.go:89] found id: ""
	I1014 15:04:51.839312   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.839324   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:51.839334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:51.839391   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:51.876997   72639 cri.go:89] found id: ""
	I1014 15:04:51.877028   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.877038   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:51.877045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:51.877091   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:51.913991   72639 cri.go:89] found id: ""
	I1014 15:04:51.914020   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.914028   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:51.914036   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:51.914046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:51.993392   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:51.993427   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:52.039722   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:52.039756   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:52.090901   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:52.090937   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:52.105014   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:52.105052   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:52.175505   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:51.137343   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.636660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.575054   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.067208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:52.809933   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.810498   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.676549   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:54.690113   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:54.690204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:54.726478   72639 cri.go:89] found id: ""
	I1014 15:04:54.726511   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.726523   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:54.726538   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:54.726611   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:54.764990   72639 cri.go:89] found id: ""
	I1014 15:04:54.765017   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.765025   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:54.765031   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:54.765095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:54.804779   72639 cri.go:89] found id: ""
	I1014 15:04:54.804808   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.804819   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:54.804828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:54.804886   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:54.848657   72639 cri.go:89] found id: ""
	I1014 15:04:54.848682   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.848698   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:54.848705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:54.848765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:54.886806   72639 cri.go:89] found id: ""
	I1014 15:04:54.886834   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.886845   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:54.886853   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:54.886912   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:54.923297   72639 cri.go:89] found id: ""
	I1014 15:04:54.923323   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.923330   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:54.923335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:54.923380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:54.966297   72639 cri.go:89] found id: ""
	I1014 15:04:54.966321   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.966329   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:54.966334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:54.966382   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:55.012047   72639 cri.go:89] found id: ""
	I1014 15:04:55.012071   72639 logs.go:282] 0 containers: []
	W1014 15:04:55.012079   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:55.012087   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:55.012097   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:55.066031   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:55.066063   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:55.080954   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:55.080981   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:55.159644   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:55.159670   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:55.159683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:55.243303   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:55.243341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:57.784555   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:57.799051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:57.799132   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:57.841084   72639 cri.go:89] found id: ""
	I1014 15:04:57.841108   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.841115   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:57.841121   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:57.841167   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:57.881510   72639 cri.go:89] found id: ""
	I1014 15:04:57.881542   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.881555   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:57.881562   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:57.881624   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:57.916893   72639 cri.go:89] found id: ""
	I1014 15:04:57.916923   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.916934   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:57.916940   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:57.916988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:57.956991   72639 cri.go:89] found id: ""
	I1014 15:04:57.957023   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.957036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:57.957046   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:57.957118   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:57.993765   72639 cri.go:89] found id: ""
	I1014 15:04:57.993792   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.993803   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:57.993809   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:57.993869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:56.136994   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.137736   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:55.566021   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.567950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:00.068276   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.310643   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:59.808898   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.032044   72639 cri.go:89] found id: ""
	I1014 15:04:58.032085   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.032098   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:58.032107   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:58.032173   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:58.069733   72639 cri.go:89] found id: ""
	I1014 15:04:58.069754   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.069762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:58.069767   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:58.069813   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:58.105851   72639 cri.go:89] found id: ""
	I1014 15:04:58.105880   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.105891   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:58.105901   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:58.105914   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:58.159922   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:58.159956   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:58.173779   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:58.173802   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:58.253551   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:58.253576   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:58.253591   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:58.342607   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:58.342647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:00.884705   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:00.900147   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:00.900215   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:00.940372   72639 cri.go:89] found id: ""
	I1014 15:05:00.940402   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.940413   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:00.940420   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:00.940489   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:00.981400   72639 cri.go:89] found id: ""
	I1014 15:05:00.981431   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.981441   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:00.981447   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:00.981517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:01.021981   72639 cri.go:89] found id: ""
	I1014 15:05:01.022002   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.022011   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:01.022016   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:01.022067   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:01.056976   72639 cri.go:89] found id: ""
	I1014 15:05:01.057005   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.057013   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:01.057020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:01.057063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:01.092702   72639 cri.go:89] found id: ""
	I1014 15:05:01.092732   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.092739   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:01.092745   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:01.092803   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:01.128861   72639 cri.go:89] found id: ""
	I1014 15:05:01.128892   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.128902   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:01.128908   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:01.128958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:01.162672   72639 cri.go:89] found id: ""
	I1014 15:05:01.162702   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.162712   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:01.162719   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:01.162791   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:01.202724   72639 cri.go:89] found id: ""
	I1014 15:05:01.202751   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.202761   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:01.202770   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:01.202785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:01.280702   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:01.280723   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:01.280735   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:01.362909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:01.362943   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:01.406737   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:01.406766   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:01.460090   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:01.460125   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:00.636730   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.136587   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:02.568415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:05.066568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:01.809661   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:04.309079   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:06.309544   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.975661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:03.989811   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:03.989874   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:04.028396   72639 cri.go:89] found id: ""
	I1014 15:05:04.028426   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.028438   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:04.028445   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:04.028499   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:04.065871   72639 cri.go:89] found id: ""
	I1014 15:05:04.065901   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.065912   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:04.065919   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:04.065980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:04.103155   72639 cri.go:89] found id: ""
	I1014 15:05:04.103184   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.103192   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:04.103198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:04.103248   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:04.139503   72639 cri.go:89] found id: ""
	I1014 15:05:04.139531   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.139539   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:04.139545   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:04.139601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:04.171638   72639 cri.go:89] found id: ""
	I1014 15:05:04.171663   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.171671   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:04.171676   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:04.171734   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:04.213720   72639 cri.go:89] found id: ""
	I1014 15:05:04.213751   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.213760   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:04.213766   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:04.213815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:04.248088   72639 cri.go:89] found id: ""
	I1014 15:05:04.248109   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.248117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:04.248121   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:04.248183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:04.286454   72639 cri.go:89] found id: ""
	I1014 15:05:04.286479   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.286487   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:04.286495   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:04.286506   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:04.339564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:04.339599   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:04.353034   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:04.353061   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:04.432764   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:04.432786   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:04.432797   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:04.514561   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:04.514613   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.057507   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:07.072798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:07.072873   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:07.113672   72639 cri.go:89] found id: ""
	I1014 15:05:07.113694   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.113701   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:07.113706   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:07.113761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:07.149321   72639 cri.go:89] found id: ""
	I1014 15:05:07.149348   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.149357   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:07.149362   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:07.149416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:07.185717   72639 cri.go:89] found id: ""
	I1014 15:05:07.185748   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.185760   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:07.185768   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:07.185822   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:07.225747   72639 cri.go:89] found id: ""
	I1014 15:05:07.225772   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.225783   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:07.225791   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:07.225843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:07.265834   72639 cri.go:89] found id: ""
	I1014 15:05:07.265864   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.265875   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:07.265882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:07.265944   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:07.300595   72639 cri.go:89] found id: ""
	I1014 15:05:07.300622   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.300631   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:07.300637   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:07.300686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:07.343249   72639 cri.go:89] found id: ""
	I1014 15:05:07.343280   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.343291   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:07.343298   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:07.343365   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:07.379525   72639 cri.go:89] found id: ""
	I1014 15:05:07.379549   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.379557   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:07.379564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:07.379576   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:07.393622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:07.393653   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:07.473973   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:07.473998   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:07.474013   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:07.556937   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:07.556971   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.602224   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:07.602249   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:05.137157   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.137297   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.137708   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.066795   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.566723   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:08.809562   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.309821   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:10.156920   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:10.170971   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:10.171037   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:10.206568   72639 cri.go:89] found id: ""
	I1014 15:05:10.206610   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.206623   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:10.206630   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:10.206689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:10.249075   72639 cri.go:89] found id: ""
	I1014 15:05:10.249101   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.249110   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:10.249121   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:10.249175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:10.285620   72639 cri.go:89] found id: ""
	I1014 15:05:10.285649   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.285660   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:10.285667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:10.285730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:10.322291   72639 cri.go:89] found id: ""
	I1014 15:05:10.322314   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.322322   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:10.322327   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:10.322379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:10.356691   72639 cri.go:89] found id: ""
	I1014 15:05:10.356720   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.356730   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:10.356738   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:10.356802   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:10.401192   72639 cri.go:89] found id: ""
	I1014 15:05:10.401223   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.401234   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:10.401242   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:10.401303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:10.438198   72639 cri.go:89] found id: ""
	I1014 15:05:10.438225   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.438236   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:10.438243   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:10.438380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:10.474142   72639 cri.go:89] found id: ""
	I1014 15:05:10.474166   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.474174   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:10.474181   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:10.474193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:10.546549   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:10.546569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:10.546582   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:10.624235   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:10.624268   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:10.664896   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:10.664926   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.719425   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:10.719464   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:11.637824   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.139552   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.566806   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.066803   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.809728   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.310153   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.234162   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:13.247614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:13.247689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:13.285040   72639 cri.go:89] found id: ""
	I1014 15:05:13.285068   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.285078   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:13.285086   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:13.285154   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:13.334084   72639 cri.go:89] found id: ""
	I1014 15:05:13.334125   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.334133   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:13.334139   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:13.334204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:13.369164   72639 cri.go:89] found id: ""
	I1014 15:05:13.369199   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.369211   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:13.369223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:13.369285   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:13.405202   72639 cri.go:89] found id: ""
	I1014 15:05:13.405232   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.405244   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:13.405252   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:13.405304   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:13.443271   72639 cri.go:89] found id: ""
	I1014 15:05:13.443302   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.443311   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:13.443317   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:13.443369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:13.483541   72639 cri.go:89] found id: ""
	I1014 15:05:13.483570   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.483580   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:13.483588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:13.483650   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:13.518580   72639 cri.go:89] found id: ""
	I1014 15:05:13.518622   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.518633   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:13.518641   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:13.518701   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:13.553638   72639 cri.go:89] found id: ""
	I1014 15:05:13.553668   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.553678   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:13.553688   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:13.553702   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:13.605379   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:13.605413   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.620525   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:13.620556   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:13.699628   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:13.699658   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:13.699672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:13.778006   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:13.778046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.316703   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:16.331511   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:16.331577   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:16.367045   72639 cri.go:89] found id: ""
	I1014 15:05:16.367075   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.367083   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:16.367089   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:16.367144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:16.403240   72639 cri.go:89] found id: ""
	I1014 15:05:16.403264   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.403274   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:16.403285   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:16.403344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:16.438570   72639 cri.go:89] found id: ""
	I1014 15:05:16.438612   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.438625   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:16.438632   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:16.438694   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:16.477153   72639 cri.go:89] found id: ""
	I1014 15:05:16.477174   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.477182   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:16.477187   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:16.477232   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:16.516308   72639 cri.go:89] found id: ""
	I1014 15:05:16.516336   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.516348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:16.516355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:16.516421   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:16.551337   72639 cri.go:89] found id: ""
	I1014 15:05:16.551365   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.551375   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:16.551383   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:16.551450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:16.587073   72639 cri.go:89] found id: ""
	I1014 15:05:16.587105   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.587117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:16.587125   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:16.587183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:16.623940   72639 cri.go:89] found id: ""
	I1014 15:05:16.623962   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.623970   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:16.623978   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:16.623989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.671593   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:16.671618   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:16.723057   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:16.723092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:16.737623   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:16.737656   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:16.809539   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:16.809569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:16.809592   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:16.636818   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.637340   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.566523   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.065985   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.809554   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.390406   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:19.404850   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:19.404928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:19.446931   72639 cri.go:89] found id: ""
	I1014 15:05:19.446962   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.446973   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:19.446980   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:19.447043   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:19.488112   72639 cri.go:89] found id: ""
	I1014 15:05:19.488136   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.488144   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:19.488150   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:19.488208   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:19.523333   72639 cri.go:89] found id: ""
	I1014 15:05:19.523365   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.523382   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:19.523389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:19.523447   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:19.557887   72639 cri.go:89] found id: ""
	I1014 15:05:19.557910   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.557918   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:19.557927   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:19.557972   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:19.593792   72639 cri.go:89] found id: ""
	I1014 15:05:19.593815   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.593822   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:19.593873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:19.593922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:19.628291   72639 cri.go:89] found id: ""
	I1014 15:05:19.628324   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.628335   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:19.628343   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:19.628405   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:19.664088   72639 cri.go:89] found id: ""
	I1014 15:05:19.664118   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.664130   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:19.664138   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:19.664211   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:19.700825   72639 cri.go:89] found id: ""
	I1014 15:05:19.700853   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.700863   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:19.700873   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:19.700886   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:19.741631   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:19.741666   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:19.792667   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:19.792706   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:19.806928   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:19.806965   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:19.880030   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:19.880059   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:19.880073   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.465251   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:22.479031   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:22.479096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:22.519123   72639 cri.go:89] found id: ""
	I1014 15:05:22.519147   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.519158   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:22.519171   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:22.519235   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:22.552250   72639 cri.go:89] found id: ""
	I1014 15:05:22.552277   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.552287   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:22.552294   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:22.552354   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:22.594213   72639 cri.go:89] found id: ""
	I1014 15:05:22.594243   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.594253   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:22.594261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:22.594310   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:22.630081   72639 cri.go:89] found id: ""
	I1014 15:05:22.630110   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.630121   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:22.630129   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:22.630195   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:22.665454   72639 cri.go:89] found id: ""
	I1014 15:05:22.665485   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.665497   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:22.665505   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:22.665568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:22.710697   72639 cri.go:89] found id: ""
	I1014 15:05:22.710725   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.710734   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:22.710742   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:22.710798   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:22.748486   72639 cri.go:89] found id: ""
	I1014 15:05:22.748516   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.748527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:22.748534   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:22.748594   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:22.784646   72639 cri.go:89] found id: ""
	I1014 15:05:22.784674   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.784684   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:22.784695   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:22.784709   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:22.797853   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:22.797880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:22.875382   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:22.875406   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:22.875422   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.957055   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:22.957089   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:20.638448   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.137051   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.066950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.566775   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.309958   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:25.810168   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.008642   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:23.008672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.561277   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:25.575543   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:25.575606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:25.614260   72639 cri.go:89] found id: ""
	I1014 15:05:25.614283   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.614291   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:25.614296   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:25.614353   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:25.654267   72639 cri.go:89] found id: ""
	I1014 15:05:25.654295   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.654307   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:25.654314   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:25.654385   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:25.707597   72639 cri.go:89] found id: ""
	I1014 15:05:25.707626   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.707637   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:25.707644   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:25.707707   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:25.747477   72639 cri.go:89] found id: ""
	I1014 15:05:25.747500   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.747508   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:25.747513   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:25.747571   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:25.785245   72639 cri.go:89] found id: ""
	I1014 15:05:25.785270   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.785279   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:25.785288   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:25.785342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:25.820619   72639 cri.go:89] found id: ""
	I1014 15:05:25.820643   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.820651   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:25.820665   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:25.820722   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:25.861644   72639 cri.go:89] found id: ""
	I1014 15:05:25.861665   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.861673   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:25.861678   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:25.861724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:25.901009   72639 cri.go:89] found id: ""
	I1014 15:05:25.901032   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.901046   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:25.901056   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:25.901068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:25.942918   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:25.942941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.993931   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:25.993964   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:26.008252   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:26.008280   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:26.087316   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:26.087336   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:26.087347   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:25.636727   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:27.637053   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:26.066529   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.567224   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.308855   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:30.811310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.667377   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:28.682586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:28.682682   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:28.729576   72639 cri.go:89] found id: ""
	I1014 15:05:28.729600   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.729608   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:28.729614   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:28.729673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:28.766637   72639 cri.go:89] found id: ""
	I1014 15:05:28.766669   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.766682   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:28.766690   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:28.766762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:28.802280   72639 cri.go:89] found id: ""
	I1014 15:05:28.802308   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.802317   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:28.802322   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:28.802395   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:28.840788   72639 cri.go:89] found id: ""
	I1014 15:05:28.840822   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.840833   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:28.840841   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:28.840898   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:28.878403   72639 cri.go:89] found id: ""
	I1014 15:05:28.878437   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.878447   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:28.878453   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:28.878505   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:28.919054   72639 cri.go:89] found id: ""
	I1014 15:05:28.919082   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.919090   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:28.919096   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:28.919146   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:28.955097   72639 cri.go:89] found id: ""
	I1014 15:05:28.955124   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.955134   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:28.955142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:28.955214   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:28.995681   72639 cri.go:89] found id: ""
	I1014 15:05:28.995711   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.995722   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:28.995731   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:28.995746   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:29.073041   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:29.073066   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:29.073083   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:29.152803   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:29.152838   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:29.192205   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:29.192239   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:29.248128   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:29.248166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:31.762647   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:31.776372   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:31.776454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:31.812234   72639 cri.go:89] found id: ""
	I1014 15:05:31.812259   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.812268   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:31.812275   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:31.812347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:31.850248   72639 cri.go:89] found id: ""
	I1014 15:05:31.850277   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.850294   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:31.850301   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:31.850363   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:31.887768   72639 cri.go:89] found id: ""
	I1014 15:05:31.887796   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.887808   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:31.887816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:31.887870   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:31.923434   72639 cri.go:89] found id: ""
	I1014 15:05:31.923464   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.923476   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:31.923483   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:31.923547   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:31.961027   72639 cri.go:89] found id: ""
	I1014 15:05:31.961055   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.961066   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:31.961073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:31.961135   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:31.996222   72639 cri.go:89] found id: ""
	I1014 15:05:31.996250   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.996260   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:31.996267   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:31.996329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:32.034396   72639 cri.go:89] found id: ""
	I1014 15:05:32.034441   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.034452   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:32.034460   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:32.034528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:32.080105   72639 cri.go:89] found id: ""
	I1014 15:05:32.080142   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.080153   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:32.080164   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:32.080178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:32.161120   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:32.161151   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:32.213511   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:32.213546   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:32.271250   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:32.271287   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:32.285452   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:32.285483   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:32.366108   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:30.136896   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:32.138906   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:31.066229   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.066370   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.067821   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.309846   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.310713   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:34.867317   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:34.882058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:34.882125   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.926220   72639 cri.go:89] found id: ""
	I1014 15:05:34.926251   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.926261   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:34.926268   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:34.926341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:34.965657   72639 cri.go:89] found id: ""
	I1014 15:05:34.965691   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.965702   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:34.965709   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:34.965775   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:35.002422   72639 cri.go:89] found id: ""
	I1014 15:05:35.002446   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.002454   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:35.002459   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:35.002523   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:35.040029   72639 cri.go:89] found id: ""
	I1014 15:05:35.040057   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.040067   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:35.040073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:35.040137   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:35.077041   72639 cri.go:89] found id: ""
	I1014 15:05:35.077067   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.077075   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:35.077080   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:35.077129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:35.113723   72639 cri.go:89] found id: ""
	I1014 15:05:35.113754   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.113763   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:35.113770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:35.113854   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:35.152003   72639 cri.go:89] found id: ""
	I1014 15:05:35.152025   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.152033   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:35.152038   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:35.152084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:35.186707   72639 cri.go:89] found id: ""
	I1014 15:05:35.186735   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.186746   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:35.186756   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:35.186769   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:35.267899   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:35.267941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:35.310382   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:35.310414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:35.364811   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:35.364852   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:35.378359   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:35.378386   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:35.453522   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:37.953807   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:37.967515   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:37.967579   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.637257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.137643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.566344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:39.566704   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.810414   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:40.308798   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:38.007923   72639 cri.go:89] found id: ""
	I1014 15:05:38.007955   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.007964   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:38.007969   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:38.008023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:38.047451   72639 cri.go:89] found id: ""
	I1014 15:05:38.047476   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.047484   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:38.047490   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:38.047542   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:38.087141   72639 cri.go:89] found id: ""
	I1014 15:05:38.087165   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.087174   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:38.087186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:38.087234   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:38.126556   72639 cri.go:89] found id: ""
	I1014 15:05:38.126583   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.126604   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:38.126612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:38.126670   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:38.165318   72639 cri.go:89] found id: ""
	I1014 15:05:38.165341   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.165350   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:38.165356   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:38.165400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:38.199498   72639 cri.go:89] found id: ""
	I1014 15:05:38.199533   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.199544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:38.199553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:38.199618   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:38.235030   72639 cri.go:89] found id: ""
	I1014 15:05:38.235058   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.235067   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:38.235072   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:38.235129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:38.268900   72639 cri.go:89] found id: ""
	I1014 15:05:38.268926   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.268935   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:38.268943   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:38.268957   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:38.282503   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:38.282532   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:38.357943   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:38.357972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:38.357987   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:38.448417   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:38.448453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:38.490023   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:38.490049   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.045691   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:41.061188   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:41.061251   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:41.102885   72639 cri.go:89] found id: ""
	I1014 15:05:41.102909   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.102917   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:41.102923   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:41.102971   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:41.139402   72639 cri.go:89] found id: ""
	I1014 15:05:41.139427   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.139437   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:41.139444   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:41.139501   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:41.179881   72639 cri.go:89] found id: ""
	I1014 15:05:41.179926   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.179939   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:41.179946   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:41.180008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:41.215861   72639 cri.go:89] found id: ""
	I1014 15:05:41.215897   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.215910   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:41.215919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:41.215987   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:41.251314   72639 cri.go:89] found id: ""
	I1014 15:05:41.251341   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.251351   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:41.251355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:41.251404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:41.285986   72639 cri.go:89] found id: ""
	I1014 15:05:41.286010   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.286017   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:41.286025   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:41.286071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:41.323730   72639 cri.go:89] found id: ""
	I1014 15:05:41.323756   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.323764   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:41.323769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:41.323816   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:41.360787   72639 cri.go:89] found id: ""
	I1014 15:05:41.360817   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.360825   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:41.360834   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:41.360847   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:41.403137   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:41.403172   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.459217   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:41.459253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:41.473529   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:41.473558   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:41.547384   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:41.547405   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:41.547416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:39.637477   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.137176   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:41.569245   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.066760   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.309212   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.310281   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.129494   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:44.144061   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:44.144129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:44.185872   72639 cri.go:89] found id: ""
	I1014 15:05:44.185896   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.185904   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:44.185909   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:44.185955   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:44.222618   72639 cri.go:89] found id: ""
	I1014 15:05:44.222648   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.222658   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:44.222663   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:44.222723   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:44.260730   72639 cri.go:89] found id: ""
	I1014 15:05:44.260761   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.260773   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:44.260780   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:44.260872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:44.303033   72639 cri.go:89] found id: ""
	I1014 15:05:44.303124   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.303141   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:44.303150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:44.303223   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:44.344573   72639 cri.go:89] found id: ""
	I1014 15:05:44.344600   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.344609   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:44.344614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:44.344660   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:44.386091   72639 cri.go:89] found id: ""
	I1014 15:05:44.386122   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.386131   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:44.386137   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:44.386199   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:44.424609   72639 cri.go:89] found id: ""
	I1014 15:05:44.424634   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.424644   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:44.424656   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:44.424724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:44.463997   72639 cri.go:89] found id: ""
	I1014 15:05:44.464023   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.464033   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:44.464043   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:44.464057   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:44.516883   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:44.516921   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:44.530785   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:44.530820   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:44.605202   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:44.605229   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:44.605245   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.685277   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:44.685312   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:47.227851   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:47.242737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:47.242817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:47.279395   72639 cri.go:89] found id: ""
	I1014 15:05:47.279421   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.279428   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:47.279434   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:47.279495   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:47.315002   72639 cri.go:89] found id: ""
	I1014 15:05:47.315032   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.315043   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:47.315050   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:47.315120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:47.354133   72639 cri.go:89] found id: ""
	I1014 15:05:47.354162   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.354173   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:47.354180   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:47.354245   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:47.389394   72639 cri.go:89] found id: ""
	I1014 15:05:47.389419   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.389427   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:47.389439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:47.389498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:47.426564   72639 cri.go:89] found id: ""
	I1014 15:05:47.426592   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.426619   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:47.426627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:47.426676   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:47.466953   72639 cri.go:89] found id: ""
	I1014 15:05:47.466980   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.466989   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:47.466996   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:47.467065   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:47.508563   72639 cri.go:89] found id: ""
	I1014 15:05:47.508595   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.508605   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:47.508613   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:47.508665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:47.548974   72639 cri.go:89] found id: ""
	I1014 15:05:47.549002   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.549012   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:47.549022   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:47.549036   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:47.604768   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:47.604799   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:47.619681   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:47.619717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:47.692479   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:47.692506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:47.692522   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:47.773711   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:47.773751   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:44.637916   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:47.137070   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.566472   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.566743   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.809406   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.811359   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:51.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.314509   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:50.330883   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:50.330958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:50.375090   72639 cri.go:89] found id: ""
	I1014 15:05:50.375121   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.375133   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:50.375140   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:50.375201   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:50.415000   72639 cri.go:89] found id: ""
	I1014 15:05:50.415031   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.415041   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:50.415048   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:50.415099   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:50.453937   72639 cri.go:89] found id: ""
	I1014 15:05:50.453967   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.453976   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:50.453983   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:50.454047   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:50.498752   72639 cri.go:89] found id: ""
	I1014 15:05:50.498778   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.498785   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:50.498790   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:50.498858   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:50.537819   72639 cri.go:89] found id: ""
	I1014 15:05:50.537855   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.537864   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:50.537871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:50.537920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:50.577141   72639 cri.go:89] found id: ""
	I1014 15:05:50.577168   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.577179   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:50.577186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:50.577250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:50.612462   72639 cri.go:89] found id: ""
	I1014 15:05:50.612504   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.612527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:50.612535   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:50.612597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:50.648816   72639 cri.go:89] found id: ""
	I1014 15:05:50.648845   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.648855   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:50.648866   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:50.648879   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:50.662546   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:50.662578   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:50.733128   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:50.733152   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:50.733166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:50.810884   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:50.810913   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.855878   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:50.855905   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:49.637103   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:52.137615   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.567300   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.066883   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.810090   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.312861   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.413608   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:53.428380   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:53.428453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:53.463440   72639 cri.go:89] found id: ""
	I1014 15:05:53.463464   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.463473   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:53.463479   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:53.463534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:53.499024   72639 cri.go:89] found id: ""
	I1014 15:05:53.499050   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.499058   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:53.499064   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:53.499121   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:53.534396   72639 cri.go:89] found id: ""
	I1014 15:05:53.534425   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.534435   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:53.534442   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:53.534504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:53.571396   72639 cri.go:89] found id: ""
	I1014 15:05:53.571422   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.571432   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:53.571439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:53.571496   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:53.606219   72639 cri.go:89] found id: ""
	I1014 15:05:53.606247   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.606254   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:53.606260   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:53.606309   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:53.644906   72639 cri.go:89] found id: ""
	I1014 15:05:53.644929   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.644938   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:53.644945   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:53.645005   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:53.684764   72639 cri.go:89] found id: ""
	I1014 15:05:53.684795   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.684808   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:53.684817   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:53.684872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:53.720559   72639 cri.go:89] found id: ""
	I1014 15:05:53.720587   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.720596   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:53.720605   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:53.720626   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.773759   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:53.773798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:53.787688   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:53.787717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:53.863141   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:53.863163   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:53.863176   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:53.942949   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:53.942989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:56.487207   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:56.500670   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:56.500730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:56.533851   72639 cri.go:89] found id: ""
	I1014 15:05:56.533882   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.533894   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:56.533901   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:56.533964   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:56.573169   72639 cri.go:89] found id: ""
	I1014 15:05:56.573194   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.573201   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:56.573207   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:56.573260   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:56.608110   72639 cri.go:89] found id: ""
	I1014 15:05:56.608138   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.608151   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:56.608158   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:56.608218   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:56.646030   72639 cri.go:89] found id: ""
	I1014 15:05:56.646054   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.646061   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:56.646067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:56.646112   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:56.689427   72639 cri.go:89] found id: ""
	I1014 15:05:56.689455   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.689465   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:56.689473   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:56.689528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:56.723831   72639 cri.go:89] found id: ""
	I1014 15:05:56.723856   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.723865   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:56.723871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:56.723928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:56.756700   72639 cri.go:89] found id: ""
	I1014 15:05:56.756725   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.756734   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:56.756741   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:56.756808   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:56.788201   72639 cri.go:89] found id: ""
	I1014 15:05:56.788228   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.788235   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:56.788242   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:56.788253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:56.847840   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:56.847876   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:56.861984   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:56.862016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:56.933190   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:56.933214   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:56.933226   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:57.015909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:57.015958   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:54.636591   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.638712   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.137008   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:55.566153   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:57.566963   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.067261   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:58.810164   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.811078   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.559421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:59.575593   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:59.575673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:59.611369   72639 cri.go:89] found id: ""
	I1014 15:05:59.611399   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.611409   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:59.611416   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:59.611485   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:59.645786   72639 cri.go:89] found id: ""
	I1014 15:05:59.645817   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.645827   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:59.645834   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:59.645895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:59.681463   72639 cri.go:89] found id: ""
	I1014 15:05:59.681491   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.681499   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:59.681504   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:59.681553   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:59.723738   72639 cri.go:89] found id: ""
	I1014 15:05:59.723767   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.723775   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:59.723782   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:59.723845   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:59.763890   72639 cri.go:89] found id: ""
	I1014 15:05:59.763919   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.763958   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:59.763966   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:59.764027   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:59.802981   72639 cri.go:89] found id: ""
	I1014 15:05:59.803007   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.803015   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:59.803021   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:59.803074   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:59.841887   72639 cri.go:89] found id: ""
	I1014 15:05:59.841916   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.841927   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:59.841934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:59.841989   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:59.877190   72639 cri.go:89] found id: ""
	I1014 15:05:59.877221   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.877231   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:59.877240   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:59.877254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:59.890838   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:59.890864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:59.970122   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:59.970147   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:59.970163   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:00.058994   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:00.059032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:00.103227   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:00.103262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:02.655437   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:02.671240   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:02.671307   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:02.708826   72639 cri.go:89] found id: ""
	I1014 15:06:02.708859   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.708871   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:02.708879   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:02.708943   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:02.744504   72639 cri.go:89] found id: ""
	I1014 15:06:02.744535   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.744546   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:02.744553   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:02.744615   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:02.781144   72639 cri.go:89] found id: ""
	I1014 15:06:02.781180   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.781193   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:02.781201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:02.781281   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:02.819527   72639 cri.go:89] found id: ""
	I1014 15:06:02.819558   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.819567   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:02.819572   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:02.819630   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:02.855653   72639 cri.go:89] found id: ""
	I1014 15:06:02.855683   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.855693   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:02.855700   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:02.855761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:02.900843   72639 cri.go:89] found id: ""
	I1014 15:06:02.900876   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.900888   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:02.900896   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:02.900961   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:02.941812   72639 cri.go:89] found id: ""
	I1014 15:06:02.941840   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.941851   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:02.941857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:02.941919   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:02.980213   72639 cri.go:89] found id: ""
	I1014 15:06:02.980238   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.980246   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:02.980253   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:02.980265   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:00.130683   72173 pod_ready.go:82] duration metric: took 4m0.000550021s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:00.130707   72173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:06:00.130723   72173 pod_ready.go:39] duration metric: took 4m13.708579322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:00.130753   72173 kubeadm.go:597] duration metric: took 4m21.979284634s to restartPrimaryControlPlane
	W1014 15:06:00.130836   72173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:00.130870   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:02.566183   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.066638   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.309953   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.311484   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.034263   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:03.034301   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:03.048574   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:03.048606   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:03.121902   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:03.121925   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:03.121939   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:03.197407   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:03.197445   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:05.737723   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:05.751892   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:05.751959   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:05.789209   72639 cri.go:89] found id: ""
	I1014 15:06:05.789235   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.789242   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:05.789247   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:05.789294   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:05.826189   72639 cri.go:89] found id: ""
	I1014 15:06:05.826220   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.826229   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:05.826236   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:05.826344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:05.864264   72639 cri.go:89] found id: ""
	I1014 15:06:05.864297   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.864308   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:05.864314   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:05.864371   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:05.899697   72639 cri.go:89] found id: ""
	I1014 15:06:05.899724   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.899732   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:05.899737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:05.899784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:05.939552   72639 cri.go:89] found id: ""
	I1014 15:06:05.939583   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.939593   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:05.939601   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:05.939668   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:05.999732   72639 cri.go:89] found id: ""
	I1014 15:06:05.999759   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.999770   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:05.999776   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:05.999834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:06.036228   72639 cri.go:89] found id: ""
	I1014 15:06:06.036259   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.036276   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:06.036284   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:06.036343   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:06.071744   72639 cri.go:89] found id: ""
	I1014 15:06:06.071774   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.071785   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:06.071795   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:06.071808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:06.125737   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:06.125774   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:06.139150   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:06.139177   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:06.206731   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:06.206757   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:06.206773   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:06.287183   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:06.287218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:07.565983   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.065897   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:07.809832   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.309290   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:08.827345   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:08.841290   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:08.841384   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:08.877789   72639 cri.go:89] found id: ""
	I1014 15:06:08.877815   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.877824   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:08.877832   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:08.877895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:08.912491   72639 cri.go:89] found id: ""
	I1014 15:06:08.912517   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.912525   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:08.912530   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:08.912586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:08.948727   72639 cri.go:89] found id: ""
	I1014 15:06:08.948755   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.948765   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:08.948773   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:08.948837   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:08.984397   72639 cri.go:89] found id: ""
	I1014 15:06:08.984428   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.984440   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:08.984448   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:08.984498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:09.019222   72639 cri.go:89] found id: ""
	I1014 15:06:09.019250   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.019260   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:09.019268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:09.019329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:09.058309   72639 cri.go:89] found id: ""
	I1014 15:06:09.058335   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.058346   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:09.058353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:09.058415   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:09.096508   72639 cri.go:89] found id: ""
	I1014 15:06:09.096535   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.096544   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:09.096550   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:09.096599   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:09.134564   72639 cri.go:89] found id: ""
	I1014 15:06:09.134611   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.134624   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:09.134635   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:09.134647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:09.188220   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:09.188254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:09.203119   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:09.203149   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:09.279357   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:09.279379   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:09.279390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:09.364219   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:09.364253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:11.910976   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:11.926067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:11.926149   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:11.966238   72639 cri.go:89] found id: ""
	I1014 15:06:11.966271   72639 logs.go:282] 0 containers: []
	W1014 15:06:11.966282   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:11.966289   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:11.966350   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:12.002580   72639 cri.go:89] found id: ""
	I1014 15:06:12.002617   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.002630   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:12.002637   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:12.002698   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:12.037014   72639 cri.go:89] found id: ""
	I1014 15:06:12.037037   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.037046   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:12.037051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:12.037111   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:12.070937   72639 cri.go:89] found id: ""
	I1014 15:06:12.070957   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.070965   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:12.070970   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:12.071019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:12.104920   72639 cri.go:89] found id: ""
	I1014 15:06:12.104949   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.104960   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:12.104967   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:12.105026   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:12.142498   72639 cri.go:89] found id: ""
	I1014 15:06:12.142530   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.142544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:12.142555   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:12.142628   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:12.179590   72639 cri.go:89] found id: ""
	I1014 15:06:12.179613   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.179621   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:12.179627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:12.179675   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:12.213947   72639 cri.go:89] found id: ""
	I1014 15:06:12.213973   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.213981   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:12.213989   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:12.213998   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:12.268214   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:12.268257   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:12.283561   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:12.283594   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:12.382344   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:12.382367   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:12.382377   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:12.469818   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:12.469854   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:12.066154   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.565962   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:12.310167   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.810273   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:15.011529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:15.025355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:15.025423   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:15.060996   72639 cri.go:89] found id: ""
	I1014 15:06:15.061028   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.061040   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:15.061047   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:15.061120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:15.103050   72639 cri.go:89] found id: ""
	I1014 15:06:15.103074   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.103082   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:15.103088   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:15.103140   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:15.140095   72639 cri.go:89] found id: ""
	I1014 15:06:15.140122   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.140132   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:15.140139   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:15.140207   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:15.174612   72639 cri.go:89] found id: ""
	I1014 15:06:15.174642   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.174654   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:15.174669   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:15.174737   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:15.209116   72639 cri.go:89] found id: ""
	I1014 15:06:15.209142   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.209152   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:15.209160   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:15.209221   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:15.242857   72639 cri.go:89] found id: ""
	I1014 15:06:15.242885   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.242896   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:15.242902   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:15.242966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:15.283038   72639 cri.go:89] found id: ""
	I1014 15:06:15.283066   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.283076   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:15.283083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:15.283144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:15.319577   72639 cri.go:89] found id: ""
	I1014 15:06:15.319604   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.319612   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:15.319622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:15.319636   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:15.391485   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:15.391506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:15.391520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:15.470140   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:15.470192   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.513098   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:15.513132   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:15.568275   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:15.568305   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:17.065956   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.566207   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:17.308463   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.309185   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.310841   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:18.085915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:18.113889   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:18.113958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:18.167486   72639 cri.go:89] found id: ""
	I1014 15:06:18.167511   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.167519   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:18.167525   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:18.167568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:18.230244   72639 cri.go:89] found id: ""
	I1014 15:06:18.230273   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.230283   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:18.230291   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:18.230351   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:18.264223   72639 cri.go:89] found id: ""
	I1014 15:06:18.264252   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.264261   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:18.264268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:18.264332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:18.298719   72639 cri.go:89] found id: ""
	I1014 15:06:18.298750   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.298762   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:18.298770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:18.298843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:18.335113   72639 cri.go:89] found id: ""
	I1014 15:06:18.335140   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.335147   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:18.335153   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:18.335212   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:18.373690   72639 cri.go:89] found id: ""
	I1014 15:06:18.373721   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.373736   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:18.373743   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:18.373792   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:18.411138   72639 cri.go:89] found id: ""
	I1014 15:06:18.411171   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.411182   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:18.411190   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:18.411250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:18.451281   72639 cri.go:89] found id: ""
	I1014 15:06:18.451306   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.451314   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:18.451323   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:18.451334   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:18.502141   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:18.502178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.517449   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:18.517476   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:18.586737   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:18.586760   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:18.586776   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:18.670234   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:18.670270   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.210200   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:21.222998   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.223053   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.257132   72639 cri.go:89] found id: ""
	I1014 15:06:21.257160   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.257167   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:21.257174   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.257237   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.290905   72639 cri.go:89] found id: ""
	I1014 15:06:21.290933   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.290945   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:21.290952   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.291007   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.331067   72639 cri.go:89] found id: ""
	I1014 15:06:21.331098   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.331108   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:21.331128   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.331178   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.370042   72639 cri.go:89] found id: ""
	I1014 15:06:21.370069   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.370077   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:21.370083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.370141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:21.414900   72639 cri.go:89] found id: ""
	I1014 15:06:21.414920   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.414932   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:21.414938   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:21.414985   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:21.452914   72639 cri.go:89] found id: ""
	I1014 15:06:21.452941   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.452952   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:21.452960   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:21.453022   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:21.486725   72639 cri.go:89] found id: ""
	I1014 15:06:21.486752   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.486763   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:21.486770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:21.486831   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:21.524012   72639 cri.go:89] found id: ""
	I1014 15:06:21.524034   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.524042   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:21.524049   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:21.524059   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:21.603238   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:21.603279   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.645655   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:21.645689   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:21.701053   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:21.701092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:21.715515   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:21.715542   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:21.781831   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:22.067051   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:24.567173   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.810342   72390 pod_ready.go:82] duration metric: took 4m0.007657098s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:21.810365   72390 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 15:06:21.810382   72390 pod_ready.go:39] duration metric: took 4m7.92113061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:21.810401   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:21.810433   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.810488   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.856565   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:21.856587   72390 cri.go:89] found id: ""
	I1014 15:06:21.856594   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:21.856654   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.861036   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.861091   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.898486   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:21.898517   72390 cri.go:89] found id: ""
	I1014 15:06:21.898528   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:21.898587   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.903145   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.903245   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.941127   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:21.941164   72390 cri.go:89] found id: ""
	I1014 15:06:21.941173   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:21.941232   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.945584   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.945658   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.994370   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:21.994398   72390 cri.go:89] found id: ""
	I1014 15:06:21.994407   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:21.994454   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.998498   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.998547   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:22.037415   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.037443   72390 cri.go:89] found id: ""
	I1014 15:06:22.037453   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:22.037507   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.041882   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:22.041947   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:22.079219   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.079243   72390 cri.go:89] found id: ""
	I1014 15:06:22.079252   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:22.079319   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.083373   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:22.083432   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:22.120795   72390 cri.go:89] found id: ""
	I1014 15:06:22.120818   72390 logs.go:282] 0 containers: []
	W1014 15:06:22.120825   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:22.120832   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:22.120889   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:22.158545   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.158571   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.158577   72390 cri.go:89] found id: ""
	I1014 15:06:22.158586   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:22.158662   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.162500   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.166734   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:22.166759   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.202711   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:22.202736   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:22.279594   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:22.279635   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:22.293836   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:22.293863   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:22.335451   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:22.335478   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:22.374244   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:22.374274   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.422538   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:22.422567   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.486973   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:22.487009   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.528871   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:22.528899   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:22.575947   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:22.575982   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:22.713356   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:22.713387   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:22.760315   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:22.760348   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:22.811144   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:22.811169   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:25.780847   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:25.800698   72390 api_server.go:72] duration metric: took 4m18.640749756s to wait for apiserver process to appear ...
	I1014 15:06:25.800733   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:25.800779   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:25.800845   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:25.841159   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:25.841193   72390 cri.go:89] found id: ""
	I1014 15:06:25.841203   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:25.841259   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.845503   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:25.845560   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:25.884122   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:25.884151   72390 cri.go:89] found id: ""
	I1014 15:06:25.884161   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:25.884223   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.889638   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:25.889700   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:25.931199   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:25.931220   72390 cri.go:89] found id: ""
	I1014 15:06:25.931230   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:25.931285   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.936063   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:25.936127   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:25.979162   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:25.979188   72390 cri.go:89] found id: ""
	I1014 15:06:25.979197   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:25.979254   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.983550   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:25.983611   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:26.021835   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:26.021854   72390 cri.go:89] found id: ""
	I1014 15:06:26.021862   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:26.021911   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.026005   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:26.026073   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:26.067719   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:26.067740   72390 cri.go:89] found id: ""
	I1014 15:06:26.067749   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:26.067803   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.073387   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:26.073453   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:26.116305   72390 cri.go:89] found id: ""
	I1014 15:06:26.116336   72390 logs.go:282] 0 containers: []
	W1014 15:06:26.116349   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:26.116358   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:26.116427   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:26.156959   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.156985   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.156991   72390 cri.go:89] found id: ""
	I1014 15:06:26.156999   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:26.157051   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.161437   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.165696   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:26.165718   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:26.282026   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:26.282056   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:26.333504   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:26.333543   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:26.376435   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:26.376469   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.416633   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:26.416660   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.388546   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.257645941s)
	I1014 15:06:26.388631   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:26.407118   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:26.417718   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:26.428364   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:26.428391   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:26.428451   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:26.437953   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:26.438026   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:26.448356   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:26.458476   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:26.458541   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:26.469941   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.482934   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:26.483016   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.495682   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:26.506113   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:26.506176   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:26.517784   72173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:26.568927   72173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:06:26.568978   72173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:26.685727   72173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:26.685855   72173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:26.685963   72173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:26.693948   72173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:26.696177   72173 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:26.696269   72173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:26.696318   72173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:26.696388   72173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:26.696438   72173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:26.696495   72173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:26.696536   72173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:26.696588   72173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:26.696639   72173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:26.696696   72173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:26.696760   72173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:26.700275   72173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:26.700406   72173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:26.831734   72173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:27.336318   72173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:06:27.574604   72173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:27.681370   72173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:27.788769   72173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:27.789324   72173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:27.791842   72173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:24.282018   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:24.295177   72639 kubeadm.go:597] duration metric: took 4m4.450514459s to restartPrimaryControlPlane
	W1014 15:06:24.295255   72639 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:24.295283   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:27.793786   72173 out.go:235]   - Booting up control plane ...
	I1014 15:06:27.793891   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:27.793980   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:27.794089   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:27.815223   72173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:27.821764   72173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:27.821817   72173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:27.965327   72173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:06:27.965707   72173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:06:28.967332   72173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001260991s
	I1014 15:06:28.967473   72173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:06:29.238014   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.942706631s)
	I1014 15:06:29.238096   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:29.258804   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:29.269440   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:29.279613   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:29.279633   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:29.279672   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:29.292840   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:29.292912   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:29.306987   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:29.319896   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:29.319970   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:29.333974   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.343993   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:29.344051   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.354691   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:29.364354   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:29.364422   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:29.374674   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:29.452845   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:06:29.452961   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:29.618263   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:29.618446   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:29.618582   72639 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:29.813387   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:29.815501   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:29.815610   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:29.815697   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:29.815799   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:29.815879   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:29.815971   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:29.816039   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:29.816125   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:29.816206   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:29.816307   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:29.816404   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:29.816454   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:29.816531   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:29.944505   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:30.106467   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:30.226356   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:30.322169   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:30.342382   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:30.343666   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:30.343736   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:30.507000   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:27.066923   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:29.068434   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:26.453659   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:26.453693   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:26.900485   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:26.900518   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:26.925431   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:26.925461   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:26.986104   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:26.986140   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:27.037557   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:27.037600   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:27.084362   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:27.084397   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:27.138680   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:27.138713   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:27.191283   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:27.191314   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:29.761781   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:06:29.769020   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:06:29.770210   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:29.770232   72390 api_server.go:131] duration metric: took 3.969490314s to wait for apiserver health ...
	I1014 15:06:29.770242   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:29.770268   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:29.770328   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:29.827908   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:29.827930   72390 cri.go:89] found id: ""
	I1014 15:06:29.827939   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:29.827994   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.837786   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:29.837864   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:29.877625   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:29.877661   72390 cri.go:89] found id: ""
	I1014 15:06:29.877672   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:29.877738   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.882502   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:29.882578   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:29.923002   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:29.923027   72390 cri.go:89] found id: ""
	I1014 15:06:29.923037   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:29.923094   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.927559   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:29.927621   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:29.966098   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:29.966124   72390 cri.go:89] found id: ""
	I1014 15:06:29.966133   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:29.966189   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.972287   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:29.972371   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:30.024389   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.024414   72390 cri.go:89] found id: ""
	I1014 15:06:30.024423   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:30.024481   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.029914   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:30.029976   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:30.085703   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.085727   72390 cri.go:89] found id: ""
	I1014 15:06:30.085737   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:30.085806   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.097004   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:30.097098   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:30.147464   72390 cri.go:89] found id: ""
	I1014 15:06:30.147494   72390 logs.go:282] 0 containers: []
	W1014 15:06:30.147505   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:30.147512   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:30.147573   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:30.195003   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.195030   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:30.195036   72390 cri.go:89] found id: ""
	I1014 15:06:30.195045   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:30.195099   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.199436   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.204079   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:30.204105   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:30.221021   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:30.221049   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:30.280979   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:30.281013   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:30.339261   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:30.339291   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.390034   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:30.390081   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.461221   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:30.461262   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.504100   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:30.504134   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:30.870561   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:30.870629   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:30.942952   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:30.942998   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:30.995435   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:30.995484   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:31.038804   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:31.038839   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:31.080187   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:31.080218   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:31.122248   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:31.122295   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:30.509157   72639 out.go:235]   - Booting up control plane ...
	I1014 15:06:30.509293   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:30.518440   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:30.520572   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:30.522337   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:30.524996   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:06:33.742510   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:06:33.742539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.742546   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.742552   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.742557   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.742562   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.742566   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.742576   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.742582   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.742615   72390 system_pods.go:74] duration metric: took 3.972347536s to wait for pod list to return data ...
	I1014 15:06:33.742628   72390 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:33.744532   72390 default_sa.go:45] found service account: "default"
	I1014 15:06:33.744551   72390 default_sa.go:55] duration metric: took 1.918153ms for default service account to be created ...
	I1014 15:06:33.744558   72390 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:33.750292   72390 system_pods.go:86] 8 kube-system pods found
	I1014 15:06:33.750315   72390 system_pods.go:89] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.750320   72390 system_pods.go:89] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.750324   72390 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.750329   72390 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.750332   72390 system_pods.go:89] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.750335   72390 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.750341   72390 system_pods.go:89] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.750346   72390 system_pods.go:89] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.750352   72390 system_pods.go:126] duration metric: took 5.790549ms to wait for k8s-apps to be running ...
	I1014 15:06:33.750358   72390 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:33.750398   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:33.770342   72390 system_svc.go:56] duration metric: took 19.978034ms WaitForService to wait for kubelet
	I1014 15:06:33.770370   72390 kubeadm.go:582] duration metric: took 4m26.610427104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:33.770392   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:33.774149   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:33.774176   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:33.774190   72390 node_conditions.go:105] duration metric: took 3.792746ms to run NodePressure ...
	I1014 15:06:33.774203   72390 start.go:241] waiting for startup goroutines ...
	I1014 15:06:33.774217   72390 start.go:246] waiting for cluster config update ...
	I1014 15:06:33.774232   72390 start.go:255] writing updated cluster config ...
	I1014 15:06:33.774560   72390 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:33.823879   72390 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:33.825962   72390 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-201291" cluster and "default" namespace by default
	I1014 15:06:33.976430   72173 kubeadm.go:310] [api-check] The API server is healthy after 5.00773575s
	I1014 15:06:33.990496   72173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:06:34.010821   72173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:06:34.051244   72173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:06:34.051513   72173 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-989166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:06:34.066447   72173 kubeadm.go:310] [bootstrap-token] Using token: 46olqw.t0lfd7bmyz0olhbh
	I1014 15:06:34.067925   72173 out.go:235]   - Configuring RBAC rules ...
	I1014 15:06:34.068073   72173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:06:34.077775   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:06:34.097676   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:06:34.103212   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:06:34.112640   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:06:34.119886   72173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:06:34.382372   72173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:06:34.825514   72173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:06:35.383856   72173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:06:35.383877   72173 kubeadm.go:310] 
	I1014 15:06:35.383939   72173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:06:35.383976   72173 kubeadm.go:310] 
	I1014 15:06:35.384094   72173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:06:35.384103   72173 kubeadm.go:310] 
	I1014 15:06:35.384136   72173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:06:35.384223   72173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:06:35.384286   72173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:06:35.384311   72173 kubeadm.go:310] 
	I1014 15:06:35.384414   72173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:06:35.384430   72173 kubeadm.go:310] 
	I1014 15:06:35.384499   72173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:06:35.384512   72173 kubeadm.go:310] 
	I1014 15:06:35.384597   72173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:06:35.384685   72173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:06:35.384744   72173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:06:35.384750   72173 kubeadm.go:310] 
	I1014 15:06:35.384821   72173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:06:35.384928   72173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:06:35.384940   72173 kubeadm.go:310] 
	I1014 15:06:35.385047   72173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385192   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:06:35.385224   72173 kubeadm.go:310] 	--control-plane 
	I1014 15:06:35.385231   72173 kubeadm.go:310] 
	I1014 15:06:35.385322   72173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:06:35.385334   72173 kubeadm.go:310] 
	I1014 15:06:35.385449   72173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385588   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:06:35.386604   72173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:06:35.386674   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:06:35.386689   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:06:35.388617   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:06:31.069009   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:33.565864   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:35.390017   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:06:35.402242   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:06:35.428958   72173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:06:35.429016   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:35.429080   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-989166 minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=embed-certs-989166 minikube.k8s.io/primary=true
	I1014 15:06:35.475775   72173 ops.go:34] apiserver oom_adj: -16
	I1014 15:06:35.645234   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.145613   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.646197   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.145401   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.645956   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.145978   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.645292   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.145444   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.646019   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.869659   72173 kubeadm.go:1113] duration metric: took 4.440701402s to wait for elevateKubeSystemPrivileges
	I1014 15:06:39.869695   72173 kubeadm.go:394] duration metric: took 5m1.76989803s to StartCluster
	I1014 15:06:39.869713   72173 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.869797   72173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:06:39.872564   72173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.872947   72173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:06:39.873165   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:06:39.873085   72173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:06:39.873246   72173 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-989166"
	I1014 15:06:39.873256   72173 addons.go:69] Setting metrics-server=true in profile "embed-certs-989166"
	I1014 15:06:39.873273   72173 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-989166"
	I1014 15:06:39.873272   72173 addons.go:69] Setting default-storageclass=true in profile "embed-certs-989166"
	I1014 15:06:39.873319   72173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-989166"
	W1014 15:06:39.873282   72173 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:06:39.873417   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873282   72173 addons.go:234] Setting addon metrics-server=true in "embed-certs-989166"
	W1014 15:06:39.873476   72173 addons.go:243] addon metrics-server should already be in state true
	I1014 15:06:39.873504   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873875   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873888   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873920   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873947   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873986   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.874050   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.874921   72173 out.go:177] * Verifying Kubernetes components...
	I1014 15:06:39.876972   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1014 15:06:39.893367   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I1014 15:06:39.893905   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.893915   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894023   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894471   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894493   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894651   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894677   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894713   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894731   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894942   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895073   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895563   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.895593   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.895778   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895970   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.896249   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.896293   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.899661   72173 addons.go:234] Setting addon default-storageclass=true in "embed-certs-989166"
	W1014 15:06:39.899685   72173 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:06:39.899714   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.900088   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.900131   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.912591   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1014 15:06:39.913089   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.913630   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.913652   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.914099   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.914287   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.914839   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1014 15:06:39.915288   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.915783   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.915802   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.916147   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.916171   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.916382   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.917766   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.917796   72173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:06:39.919192   72173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:06:35.567508   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:38.065792   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:40.066618   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:39.919297   72173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:39.919320   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:06:39.919339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.920468   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:06:39.920489   72173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:06:39.920507   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.921603   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1014 15:06:39.921970   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.922502   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.922525   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.922994   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.923333   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923585   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.923627   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.923826   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.923846   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923876   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924028   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.924270   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.924291   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.924310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924397   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.924674   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924840   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.925027   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.925201   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.945435   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1014 15:06:39.945958   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.946468   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.946497   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.946855   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.947023   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.948734   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.948924   72173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:39.948942   72173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:06:39.948966   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.951019   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.951437   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951570   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.951742   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.951918   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.952058   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:40.129893   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:06:40.215427   72173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224710   72173 node_ready.go:49] node "embed-certs-989166" has status "Ready":"True"
	I1014 15:06:40.224731   72173 node_ready.go:38] duration metric: took 9.266994ms for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224742   72173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:40.230651   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:40.394829   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:40.422573   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:40.430300   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:06:40.430319   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:06:40.503826   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:06:40.503857   72173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:06:40.586087   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.586116   72173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:06:40.726605   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.887453   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887475   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.887809   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.887857   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.887869   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.887886   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887898   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.888127   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.888150   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.888160   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.901694   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.901717   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.902091   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.902103   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.902111   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.352636   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.352670   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.352963   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:41.353017   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353029   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.353036   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.353043   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.353274   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353302   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578200   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578219   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578484   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578529   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578554   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578588   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578827   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578844   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578854   72173 addons.go:475] Verifying addon metrics-server=true in "embed-certs-989166"
	I1014 15:06:41.581312   72173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:06:41.582506   72173 addons.go:510] duration metric: took 1.709432803s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:06:42.237265   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.240605   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:42.067701   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.566134   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:46.738094   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:48.739238   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.238145   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.238167   72173 pod_ready.go:82] duration metric: took 9.007493385s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.238176   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243268   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.243299   72173 pod_ready.go:82] duration metric: took 5.116183ms for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243311   72173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.247979   72173 pod_ready.go:93] pod "etcd-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.248001   72173 pod_ready.go:82] duration metric: took 4.682826ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.248009   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252590   72173 pod_ready.go:93] pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.252615   72173 pod_ready.go:82] duration metric: took 4.599399ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252624   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257541   72173 pod_ready.go:93] pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.257566   72173 pod_ready.go:82] duration metric: took 4.935116ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257575   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:47.064934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.066284   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.635873   72173 pod_ready.go:93] pod "kube-proxy-g572s" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.635895   72173 pod_ready.go:82] duration metric: took 378.313947ms for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.635904   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035141   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:50.035169   72173 pod_ready.go:82] duration metric: took 399.257073ms for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035179   72173 pod_ready.go:39] duration metric: took 9.810424567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:50.035195   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:50.035258   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:50.054964   72173 api_server.go:72] duration metric: took 10.181978114s to wait for apiserver process to appear ...
	I1014 15:06:50.054996   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:50.055020   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:06:50.061606   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:06:50.063380   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:50.063411   72173 api_server.go:131] duration metric: took 8.40661ms to wait for apiserver health ...
	I1014 15:06:50.063421   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:50.239258   72173 system_pods.go:59] 9 kube-system pods found
	I1014 15:06:50.239286   72173 system_pods.go:61] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.239292   72173 system_pods.go:61] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.239295   72173 system_pods.go:61] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.239299   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.239303   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.239305   72173 system_pods.go:61] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.239308   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.239314   72173 system_pods.go:61] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.239317   72173 system_pods.go:61] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.239325   72173 system_pods.go:74] duration metric: took 175.89649ms to wait for pod list to return data ...
	I1014 15:06:50.239334   72173 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:50.435980   72173 default_sa.go:45] found service account: "default"
	I1014 15:06:50.436007   72173 default_sa.go:55] duration metric: took 196.667838ms for default service account to be created ...
	I1014 15:06:50.436017   72173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:50.639185   72173 system_pods.go:86] 9 kube-system pods found
	I1014 15:06:50.639224   72173 system_pods.go:89] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.639234   72173 system_pods.go:89] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.639241   72173 system_pods.go:89] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.639248   72173 system_pods.go:89] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.639254   72173 system_pods.go:89] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.639262   72173 system_pods.go:89] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.639269   72173 system_pods.go:89] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.639283   72173 system_pods.go:89] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.639295   72173 system_pods.go:89] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.639309   72173 system_pods.go:126] duration metric: took 203.286322ms to wait for k8s-apps to be running ...
	I1014 15:06:50.639327   72173 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:50.639388   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:50.655377   72173 system_svc.go:56] duration metric: took 16.0447ms WaitForService to wait for kubelet
	I1014 15:06:50.655402   72173 kubeadm.go:582] duration metric: took 10.782421893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:50.655425   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:50.835507   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:50.835543   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:50.835556   72173 node_conditions.go:105] duration metric: took 180.126755ms to run NodePressure ...
	I1014 15:06:50.835570   72173 start.go:241] waiting for startup goroutines ...
	I1014 15:06:50.835580   72173 start.go:246] waiting for cluster config update ...
	I1014 15:06:50.835594   72173 start.go:255] writing updated cluster config ...
	I1014 15:06:50.835924   72173 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:50.883737   72173 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:50.886200   72173 out.go:177] * Done! kubectl is now configured to use "embed-certs-989166" cluster and "default" namespace by default
	I1014 15:06:51.066344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:53.566466   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:56.066734   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:58.567007   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:01.066112   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:03.068758   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:05.566174   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:07.566274   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:09.566829   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:10.525694   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:07:10.526665   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:10.526908   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:12.066402   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:13.560638   71679 pod_ready.go:82] duration metric: took 4m0.000980901s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	E1014 15:07:13.560669   71679 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:07:13.560693   71679 pod_ready.go:39] duration metric: took 4m13.04495779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:13.560725   71679 kubeadm.go:597] duration metric: took 4m21.006404411s to restartPrimaryControlPlane
	W1014 15:07:13.560791   71679 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:07:13.560823   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:07:15.527128   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:15.527376   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:25.527779   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:25.528060   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:39.775370   71679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.214519412s)
	I1014 15:07:39.775448   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:07:39.790736   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:07:39.800575   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:07:39.810380   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:07:39.810402   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:07:39.810462   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:07:39.819880   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:07:39.819938   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:07:39.830542   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:07:39.840268   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:07:39.840318   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:07:39.849727   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.858513   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:07:39.858651   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.869154   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:07:39.878724   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:07:39.878798   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:07:39.888123   71679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:07:39.942676   71679 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:07:39.942771   71679 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:07:40.060558   71679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:07:40.060698   71679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:07:40.060861   71679 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:07:40.076085   71679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:07:40.078200   71679 out.go:235]   - Generating certificates and keys ...
	I1014 15:07:40.078301   71679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:07:40.078381   71679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:07:40.078505   71679 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:07:40.078620   71679 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:07:40.078717   71679 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:07:40.078794   71679 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:07:40.078887   71679 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:07:40.078973   71679 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:07:40.079069   71679 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:07:40.079161   71679 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:07:40.079234   71679 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:07:40.079315   71679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:07:40.177082   71679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:07:40.264965   71679 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:07:40.415660   71679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:07:40.556759   71679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:07:40.727152   71679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:07:40.727573   71679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:07:40.730409   71679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:07:40.732204   71679 out.go:235]   - Booting up control plane ...
	I1014 15:07:40.732328   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:07:40.732440   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:07:40.732529   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:07:40.751839   71679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:07:40.758034   71679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:07:40.758095   71679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:07:40.895135   71679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:07:40.895254   71679 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:07:41.397066   71679 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.194797ms
	I1014 15:07:41.397209   71679 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:07:46.401247   71679 kubeadm.go:310] [api-check] The API server is healthy after 5.002197966s
	I1014 15:07:46.419134   71679 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:07:46.433128   71679 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:07:46.477079   71679 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:07:46.477289   71679 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:07:46.492703   71679 kubeadm.go:310] [bootstrap-token] Using token: 1vsv04.mf3pqj2ow157sq8h
	I1014 15:07:46.494314   71679 out.go:235]   - Configuring RBAC rules ...
	I1014 15:07:46.494467   71679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:07:46.501090   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:07:46.515987   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:07:46.522417   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:07:46.528612   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:07:46.536975   71679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:07:46.810642   71679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:07:47.240531   71679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:07:47.810279   71679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:07:47.811169   71679 kubeadm.go:310] 
	I1014 15:07:47.811230   71679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:07:47.811238   71679 kubeadm.go:310] 
	I1014 15:07:47.811307   71679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:07:47.811312   71679 kubeadm.go:310] 
	I1014 15:07:47.811335   71679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:07:47.811388   71679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:07:47.811440   71679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:07:47.811447   71679 kubeadm.go:310] 
	I1014 15:07:47.811501   71679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:07:47.811507   71679 kubeadm.go:310] 
	I1014 15:07:47.811546   71679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:07:47.811553   71679 kubeadm.go:310] 
	I1014 15:07:47.811600   71679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:07:47.811667   71679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:07:47.811755   71679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:07:47.811771   71679 kubeadm.go:310] 
	I1014 15:07:47.811844   71679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:07:47.811912   71679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:07:47.811921   71679 kubeadm.go:310] 
	I1014 15:07:47.811999   71679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812091   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:07:47.812139   71679 kubeadm.go:310] 	--control-plane 
	I1014 15:07:47.812153   71679 kubeadm.go:310] 
	I1014 15:07:47.812231   71679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:07:47.812238   71679 kubeadm.go:310] 
	I1014 15:07:47.812306   71679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812393   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:07:47.814071   71679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:07:47.814103   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:07:47.814113   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:07:47.816033   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:07:45.528527   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:45.528768   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:47.817325   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:07:47.829639   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
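	A hedged aside for readers retracing this step (not executed by the test run): once minikube has written the bridge CNI conflist above, the file can be inspected over the profile's SSH session. The profile name below is taken from this run; the exact 496-byte contents of /etc/cni/net.d/1-k8s.conflist are not captured in this log.
	
	# Hypothetical inspection commands, not part of the test:
	minikube ssh -p no-preload-813300 -- sudo cat /etc/cni/net.d/1-k8s.conflist
	# CRI-O loads CNI configs from /etc/cni/net.d; list what is present there
	minikube ssh -p no-preload-813300 -- sudo ls -la /etc/cni/net.d/
	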
	I1014 15:07:47.847797   71679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:07:47.847857   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:47.847929   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-813300 minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=no-preload-813300 minikube.k8s.io/primary=true
	I1014 15:07:48.039959   71679 ops.go:34] apiserver oom_adj: -16
	I1014 15:07:48.040095   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:48.540295   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.040911   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.540233   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.040146   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.540494   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.041033   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.540516   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.040935   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.146854   71679 kubeadm.go:1113] duration metric: took 4.299055033s to wait for elevateKubeSystemPrivileges
	I1014 15:07:52.146890   71679 kubeadm.go:394] duration metric: took 4m59.642546726s to StartCluster
	I1014 15:07:52.146906   71679 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.146987   71679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:07:52.148782   71679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.149067   71679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:07:52.149168   71679 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:07:52.149303   71679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-813300"
	I1014 15:07:52.149333   71679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-813300"
	I1014 15:07:52.149342   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1014 15:07:52.149355   71679 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:07:52.149378   71679 addons.go:69] Setting default-storageclass=true in profile "no-preload-813300"
	I1014 15:07:52.149390   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149412   71679 addons.go:69] Setting metrics-server=true in profile "no-preload-813300"
	I1014 15:07:52.149447   71679 addons.go:234] Setting addon metrics-server=true in "no-preload-813300"
	W1014 15:07:52.149461   71679 addons.go:243] addon metrics-server should already be in state true
	I1014 15:07:52.149494   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149421   71679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-813300"
	I1014 15:07:52.149748   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149789   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149861   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149890   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149905   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149928   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.150482   71679 out.go:177] * Verifying Kubernetes components...
	I1014 15:07:52.152252   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:07:52.167205   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1014 15:07:52.170723   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I1014 15:07:52.170742   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.170728   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1014 15:07:52.171111   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171321   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171386   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171678   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171702   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171717   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.171900   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171916   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.172164   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172243   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172279   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172325   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.172386   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.172868   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172916   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.175482   71679 addons.go:234] Setting addon default-storageclass=true in "no-preload-813300"
	W1014 15:07:52.175502   71679 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:07:52.175529   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.175763   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.175792   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.190835   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1014 15:07:52.191422   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.191767   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I1014 15:07:52.191901   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1014 15:07:52.192010   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.192027   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192317   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192436   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.192481   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192988   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193010   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192992   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193060   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.193474   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193524   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.193530   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193563   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.193729   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.193770   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.195702   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.195770   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.197642   71679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:07:52.197652   71679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:07:52.198957   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:07:52.198978   71679 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:07:52.198998   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.199075   71679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.199096   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:07:52.199111   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.202637   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203064   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203088   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203245   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.203515   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.203519   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203663   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.203812   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.203878   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.204187   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.204377   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.204535   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.204683   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.231332   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I1014 15:07:52.231813   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.232320   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.232344   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.232645   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.232836   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.234309   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.234570   71679 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.234585   71679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:07:52.234622   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.237749   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238364   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.238393   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238562   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.238744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.238903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.239031   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.375830   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:07:52.401606   71679 node_ready.go:35] waiting up to 6m0s for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431363   71679 node_ready.go:49] node "no-preload-813300" has status "Ready":"True"
	I1014 15:07:52.431393   71679 node_ready.go:38] duration metric: took 29.758277ms for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431405   71679 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:52.446747   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:52.501642   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:07:52.501664   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:07:52.509733   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.515833   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.536485   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:07:52.536508   71679 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:07:52.622269   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.622299   71679 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:07:52.702873   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.909827   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.909865   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910194   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910209   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.910235   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.910249   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910510   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910525   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918161   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.918182   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.918473   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.918493   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918480   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:53.707659   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.191781585s)
	I1014 15:07:53.707706   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.707719   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708011   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708035   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:53.708052   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.708062   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708330   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708346   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.060665   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.357747934s)
	I1014 15:07:54.060752   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.060770   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.061069   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.061153   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.061164   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.061173   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.061184   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.062712   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.062787   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.062797   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.062811   71679 addons.go:475] Verifying addon metrics-server=true in "no-preload-813300"
	I1014 15:07:54.064762   71679 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:07:54.066623   71679 addons.go:510] duration metric: took 1.917465271s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
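	A hedged aside, not executed by the test: after the addons above report enabled, the usual way to watch metrics-server come up is to poll its Deployment and APIService. The Deployment name matches the metrics-server pod listed later in this log; the APIService name is the conventional one registered via metrics-apiservice.yaml and is an assumption, not taken from this log.
	
	# Hypothetical follow-up checks:
	kubectl --context no-preload-813300 -n kube-system get deploy metrics-server
	kubectl --context no-preload-813300 get apiservice v1beta1.metrics.k8s.io   # assumed conventional name
	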
	I1014 15:07:54.454216   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:56.455649   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:56.455674   71679 pod_ready.go:82] duration metric: took 4.00889709s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:56.455689   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:58.461687   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:59.962360   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.962382   71679 pod_ready.go:82] duration metric: took 3.506686516s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.962391   71679 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969241   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.969261   71679 pod_ready.go:82] duration metric: took 6.864356ms for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969270   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974810   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.974828   71679 pod_ready.go:82] duration metric: took 5.552122ms for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974837   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979555   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.979580   71679 pod_ready.go:82] duration metric: took 4.735265ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979592   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985111   71679 pod_ready.go:93] pod "kube-proxy-54rrd" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.985138   71679 pod_ready.go:82] duration metric: took 5.538126ms for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985150   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359524   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:08:00.359548   71679 pod_ready.go:82] duration metric: took 374.389838ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359558   71679 pod_ready.go:39] duration metric: took 7.928141116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:08:00.359575   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:08:00.359626   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:08:00.376115   71679 api_server.go:72] duration metric: took 8.22700683s to wait for apiserver process to appear ...
	I1014 15:08:00.376144   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:08:00.376169   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:08:00.381225   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:08:00.382348   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:08:00.382377   71679 api_server.go:131] duration metric: took 6.225832ms to wait for apiserver health ...
	I1014 15:08:00.382386   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:08:00.563350   71679 system_pods.go:59] 9 kube-system pods found
	I1014 15:08:00.563382   71679 system_pods.go:61] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.563386   71679 system_pods.go:61] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.563390   71679 system_pods.go:61] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.563394   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.563399   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.563402   71679 system_pods.go:61] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.563405   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.563412   71679 system_pods.go:61] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.563416   71679 system_pods.go:61] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.563424   71679 system_pods.go:74] duration metric: took 181.032852ms to wait for pod list to return data ...
	I1014 15:08:00.563436   71679 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:08:00.760054   71679 default_sa.go:45] found service account: "default"
	I1014 15:08:00.760084   71679 default_sa.go:55] duration metric: took 196.637678ms for default service account to be created ...
	I1014 15:08:00.760095   71679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:08:00.962545   71679 system_pods.go:86] 9 kube-system pods found
	I1014 15:08:00.962577   71679 system_pods.go:89] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.962583   71679 system_pods.go:89] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.962587   71679 system_pods.go:89] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.962591   71679 system_pods.go:89] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.962605   71679 system_pods.go:89] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.962609   71679 system_pods.go:89] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.962613   71679 system_pods.go:89] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.962619   71679 system_pods.go:89] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.962623   71679 system_pods.go:89] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.962633   71679 system_pods.go:126] duration metric: took 202.532202ms to wait for k8s-apps to be running ...
	I1014 15:08:00.962640   71679 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:08:00.962682   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:00.980272   71679 system_svc.go:56] duration metric: took 17.624381ms WaitForService to wait for kubelet
	I1014 15:08:00.980310   71679 kubeadm.go:582] duration metric: took 8.831207019s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:08:00.980333   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:08:01.160914   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:08:01.160947   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:08:01.160961   71679 node_conditions.go:105] duration metric: took 180.622279ms to run NodePressure ...
	I1014 15:08:01.160976   71679 start.go:241] waiting for startup goroutines ...
	I1014 15:08:01.160985   71679 start.go:246] waiting for cluster config update ...
	I1014 15:08:01.161000   71679 start.go:255] writing updated cluster config ...
	I1014 15:08:01.161357   71679 ssh_runner.go:195] Run: rm -f paused
	I1014 15:08:01.212486   71679 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:08:01.215083   71679 out.go:177] * Done! kubectl is now configured to use "no-preload-813300" cluster and "default" namespace by default
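	A hedged aside, not executed by the test: the "Done!" line above means kubectl now points at this profile, so the state recorded earlier in the log (node Ready, system pods Running, metrics-server still Pending) can be re-checked directly.
	
	# Hypothetical verification commands:
	kubectl config current-context        # expected: no-preload-813300
	kubectl get nodes                     # the single control-plane node should report Ready
	kubectl -n kube-system get pods       # coredns, etcd, kube-apiserver/controller-manager/scheduler,
	                                      # kube-proxy, storage-provisioner Running; metrics-server may be Pending
	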
	I1014 15:08:25.530669   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:08:25.530970   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530998   72639 kubeadm.go:310] 
	I1014 15:08:25.531059   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:08:25.531114   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:08:25.531125   72639 kubeadm.go:310] 
	I1014 15:08:25.531177   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:08:25.531238   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:08:25.531381   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:08:25.531392   72639 kubeadm.go:310] 
	I1014 15:08:25.531527   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:08:25.531587   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:08:25.531633   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:08:25.531643   72639 kubeadm.go:310] 
	I1014 15:08:25.531766   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:08:25.531872   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtime's CLI.
	I1014 15:08:25.531891   72639 kubeadm.go:310] 
	I1014 15:08:25.532038   72639 kubeadm.go:310] 		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:08:25.532174   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:08:25.532281   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:08:25.532377   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:08:25.532418   72639 kubeadm.go:310] 
	I1014 15:08:25.532543   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:08:25.532640   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:08:25.532742   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 15:08:25.532833   72639 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 15:08:25.532870   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:08:31.003635   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.470741012s)
	I1014 15:08:31.003724   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:31.018666   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:08:31.029707   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:08:31.029729   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:08:31.029776   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:08:31.039554   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:08:31.039625   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:08:31.049748   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:08:31.059618   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:08:31.059682   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:08:31.069369   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.078321   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:08:31.078385   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.088006   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:08:31.096681   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:08:31.096742   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:08:31.106269   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:08:31.182768   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:08:31.182833   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:08:31.341660   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:08:31.341833   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:08:31.342008   72639 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:08:31.538731   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:08:31.540933   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:08:31.541037   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:08:31.541124   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:08:31.541270   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:08:31.541386   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:08:31.541486   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:08:31.541559   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:08:31.541663   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:08:31.541750   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:08:31.542000   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:08:31.542534   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:08:31.542627   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:08:31.542711   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:08:31.847005   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:08:32.049586   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:08:32.355652   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:08:32.511031   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:08:32.526310   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:08:32.526755   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:08:32.526841   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:08:32.665898   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:08:32.667688   72639 out.go:235]   - Booting up control plane ...
	I1014 15:08:32.667806   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:08:32.681232   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:08:32.682929   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:08:32.683704   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:08:32.685936   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:09:12.687998   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:09:12.688248   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:12.688517   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:17.689026   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:17.689213   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:27.689821   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:27.690119   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:47.690936   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:47.691185   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691438   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:10:27.691721   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691744   72639 kubeadm.go:310] 
	I1014 15:10:27.691779   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:10:27.691847   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:10:27.691867   72639 kubeadm.go:310] 
	I1014 15:10:27.691907   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:10:27.691972   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:10:27.692124   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:10:27.692136   72639 kubeadm.go:310] 
	I1014 15:10:27.692253   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:10:27.692311   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:10:27.692352   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:10:27.692363   72639 kubeadm.go:310] 
	I1014 15:10:27.692497   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:10:27.692617   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:10:27.692633   72639 kubeadm.go:310] 
	I1014 15:10:27.692787   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:10:27.692915   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:10:27.693051   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:10:27.693146   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:10:27.693158   72639 kubeadm.go:310] 
	I1014 15:10:27.693497   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:10:27.693627   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:10:27.693710   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 15:10:27.693770   72639 kubeadm.go:394] duration metric: took 8m7.905137486s to StartCluster
	I1014 15:10:27.693808   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:10:27.693863   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:10:27.735373   72639 cri.go:89] found id: ""
	I1014 15:10:27.735410   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.735419   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:10:27.735425   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:10:27.735484   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:10:27.775691   72639 cri.go:89] found id: ""
	I1014 15:10:27.775713   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.775721   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:10:27.775727   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:10:27.775778   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:10:27.811621   72639 cri.go:89] found id: ""
	I1014 15:10:27.811645   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.811653   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:10:27.811658   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:10:27.811718   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:10:27.850894   72639 cri.go:89] found id: ""
	I1014 15:10:27.850917   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.850925   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:10:27.850931   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:10:27.850979   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:10:27.891559   72639 cri.go:89] found id: ""
	I1014 15:10:27.891596   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.891608   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:10:27.891616   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:10:27.891671   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:10:27.929896   72639 cri.go:89] found id: ""
	I1014 15:10:27.929929   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.929942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:10:27.930002   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:10:27.930096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:10:27.964801   72639 cri.go:89] found id: ""
	I1014 15:10:27.964828   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.964839   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:10:27.964845   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:10:27.964905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:10:28.011737   72639 cri.go:89] found id: ""
	I1014 15:10:28.011761   72639 logs.go:282] 0 containers: []
	W1014 15:10:28.011769   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:10:28.011777   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:10:28.011788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:10:28.088053   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:10:28.088082   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:10:28.088098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:10:28.214495   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:10:28.214531   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:10:28.254766   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:10:28.254796   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:10:28.304942   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:10:28.304977   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1014 15:10:28.319674   72639 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 15:10:28.319729   72639 out.go:270] * 
	W1014 15:10:28.319783   72639 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.319802   72639 out.go:270] * 
	W1014 15:10:28.320716   72639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 15:10:28.324551   72639 out.go:201] 
	W1014 15:10:28.325905   72639 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.325940   72639 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 15:10:28.325985   72639 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 15:10:28.327473   72639 out.go:201] 
	
	
	==> CRI-O <==
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.144608785Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918630144581423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f87b6b94-315f-4cc5-a4e1-d4e17100d099 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.145194886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=70b45ebd-efe4-411b-993c-f2d5e8826fa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.145264054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=70b45ebd-efe4-411b-993c-f2d5e8826fa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.145337950Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=70b45ebd-efe4-411b-993c-f2d5e8826fa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.179380462Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e18edd49-06db-49c6-b742-65e8a5696c5f name=/runtime.v1.RuntimeService/Version
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.179497508Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e18edd49-06db-49c6-b742-65e8a5696c5f name=/runtime.v1.RuntimeService/Version
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.181814207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ed4b76a3-4ab5-458c-a0aa-9b61e91efa17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.182227012Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918630182207690,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ed4b76a3-4ab5-458c-a0aa-9b61e91efa17 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.183100439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9c6366f-d4d6-4441-9591-1bcba0b7acc8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.183167740Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9c6366f-d4d6-4441-9591-1bcba0b7acc8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.183199864Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c9c6366f-d4d6-4441-9591-1bcba0b7acc8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.217408963Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7bbcf5e2-2240-4b3d-8628-b7b86575021a name=/runtime.v1.RuntimeService/Version
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.217498393Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7bbcf5e2-2240-4b3d-8628-b7b86575021a name=/runtime.v1.RuntimeService/Version
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.218866074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba3532ea-cabe-44e1-9428-ad1a85f8ed14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.219469006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918630219442329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba3532ea-cabe-44e1-9428-ad1a85f8ed14 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.220076896Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c4e1381-ded5-407a-9ff3-77b5df6c7b4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.220135574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c4e1381-ded5-407a-9ff3-77b5df6c7b4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.220166739Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c4e1381-ded5-407a-9ff3-77b5df6c7b4c name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.255855306Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5fff3d62-85fd-4902-9c15-fa1d92ea65a3 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.255929141Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5fff3d62-85fd-4902-9c15-fa1d92ea65a3 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.256856261Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2b64671d-86d6-4ece-9187-fd3cd6b4ef1e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.257186749Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918630257167631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2b64671d-86d6-4ece-9187-fd3cd6b4ef1e name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.257942110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5cea1b8-ae5c-414c-a692-f6d357a47a2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.258039379Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5cea1b8-ae5c-414c-a692-f6d357a47a2e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:10:30 old-k8s-version-399767 crio[635]: time="2024-10-14 15:10:30.258088872Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=f5cea1b8-ae5c-414c-a692-f6d357a47a2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct14 15:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052051] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050116] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct14 15:02] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.605075] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.701901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.221397] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.058897] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064336] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.225460] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.166157] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.271984] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.642881] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.070885] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.471808] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[ +13.079512] kauditd_printk_skb: 46 callbacks suppressed
	[Oct14 15:06] systemd-fstab-generator[5074]: Ignoring "noauto" option for root device
	[Oct14 15:08] systemd-fstab-generator[5361]: Ignoring "noauto" option for root device
	[  +0.073672] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:10:30 up 8 min,  0 users,  load average: 0.01, 0.05, 0.01
	Linux old-k8s-version-399767 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000b6c7e0)
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: goroutine 152 [select]:
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009b3ef0, 0x4f0ac20, 0xc000051040, 0x1, 0xc0001000c0)
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002570a0, 0xc0001000c0)
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b7a3d0, 0xc000a05600)
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 14 15:10:27 old-k8s-version-399767 kubelet[5542]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 14 15:10:27 old-k8s-version-399767 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 14 15:10:27 old-k8s-version-399767 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 14 15:10:28 old-k8s-version-399767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Oct 14 15:10:28 old-k8s-version-399767 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 14 15:10:28 old-k8s-version-399767 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 14 15:10:28 old-k8s-version-399767 kubelet[5597]: I1014 15:10:28.182023    5597 server.go:416] Version: v1.20.0
	Oct 14 15:10:28 old-k8s-version-399767 kubelet[5597]: I1014 15:10:28.182503    5597 server.go:837] Client rotation is on, will bootstrap in background
	Oct 14 15:10:28 old-k8s-version-399767 kubelet[5597]: I1014 15:10:28.185470    5597 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 14 15:10:28 old-k8s-version-399767 kubelet[5597]: I1014 15:10:28.186886    5597 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Oct 14 15:10:28 old-k8s-version-399767 kubelet[5597]: W1014 15:10:28.186962    5597 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (253.751881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-399767" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (733.91s)
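In this failure the kubelet health endpoint at localhost:10248 refused connections for the entire wait-control-plane timeout, and minikube's own suggestion in the log above (W1014 15:10:28.325940) is to retry the start with a kubelet cgroup-driver override. A minimal re-run sketch under that suggestion, not a recorded command from this run: the flags mirror the start command shown for this profile in the Audit table below, and the --kubernetes-version value is assumed from the "[init] Using Kubernetes version: v1.20.0" line rather than taken from the (truncated) Audit entry.

	# sketch only: re-run the failing profile with the cgroup-driver override suggested in the log;
	# all flags besides --extra-config are copied from the recorded start command, and the
	# version flag is an assumption based on the kubeadm init output above
	out/minikube-linux-amd64 start -p old-k8s-version-399767 \
	  --memory=2200 --alsologtostderr --wait=true \
	  --kvm-network=default --kvm-qemu-uri=qemu:///system \
	  --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd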

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-14 15:15:34.384205891 +0000 UTC m=+5818.665554220
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-201291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-201291 logs -n 25: (1.987012101s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-517678 sudo cat                              | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo find                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo crio                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-517678                                       | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:58:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:58:18.000027   72639 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:58:18.000165   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000176   72639 out.go:358] Setting ErrFile to fd 2...
	I1014 14:58:18.000189   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000390   72639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:58:18.000911   72639 out.go:352] Setting JSON to false
	I1014 14:58:18.001828   72639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6048,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:58:18.001919   72639 start.go:139] virtualization: kvm guest
	I1014 14:58:18.004056   72639 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:58:18.005382   72639 notify.go:220] Checking for updates...
	I1014 14:58:18.005437   72639 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:58:18.006939   72639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:58:18.008275   72639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:58:18.009565   72639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:58:18.010773   72639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:58:18.011941   72639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:58:18.013472   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:58:18.013833   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.013892   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.028372   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1014 14:58:18.028786   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.029355   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.029375   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.029671   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.029827   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.031644   72639 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:58:18.033229   72639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:58:18.033524   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.033565   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.048210   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1014 14:58:18.048620   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.049080   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.049102   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.049377   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.049550   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.084664   72639 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:58:18.085942   72639 start.go:297] selected driver: kvm2
	I1014 14:58:18.085952   72639 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.086042   72639 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:58:18.086707   72639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.086795   72639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:58:18.101802   72639 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:58:18.102194   72639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:58:18.102224   72639 cni.go:84] Creating CNI manager for ""
	I1014 14:58:18.102263   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:58:18.102315   72639 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.102441   72639 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.105418   72639 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:58:16.182868   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:18.106656   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:58:18.106696   72639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:58:18.106708   72639 cache.go:56] Caching tarball of preloaded images
	I1014 14:58:18.106790   72639 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:58:18.106800   72639 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:58:18.106889   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:58:18.107063   72639 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:58:22.262902   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:25.334877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:31.414867   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:34.486863   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:40.566883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:43.638929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:49.718856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:52.790946   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:58.870883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:01.942844   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:08.022831   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:11.094893   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:17.174897   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:20.246818   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:26.326911   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:29.398852   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:35.478877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:38.550829   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:44.630857   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:47.702856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:53.782842   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:56.854890   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:02.934894   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:06.006879   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:12.086905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:15.158856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:21.238905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:24.310889   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:30.390878   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:33.462909   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:39.542866   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:42.614929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:48.694859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:51.766865   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:57.846913   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:00.918859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:06.998892   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:10.070810   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:13.075950   72173 start.go:364] duration metric: took 3m43.687804446s to acquireMachinesLock for "embed-certs-989166"
	I1014 15:01:13.076005   72173 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:13.076011   72173 fix.go:54] fixHost starting: 
	I1014 15:01:13.076341   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:13.076386   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:13.092168   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I1014 15:01:13.092686   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:13.093180   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:01:13.093204   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:13.093560   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:13.093749   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:13.093889   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:01:13.095639   72173 fix.go:112] recreateIfNeeded on embed-certs-989166: state=Stopped err=<nil>
	I1014 15:01:13.095665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	W1014 15:01:13.095827   72173 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:13.097909   72173 out.go:177] * Restarting existing kvm2 VM for "embed-certs-989166" ...
	I1014 15:01:13.099253   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Start
	I1014 15:01:13.099433   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring networks are active...
	I1014 15:01:13.100328   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network default is active
	I1014 15:01:13.100683   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network mk-embed-certs-989166 is active
	I1014 15:01:13.101062   72173 main.go:141] libmachine: (embed-certs-989166) Getting domain xml...
	I1014 15:01:13.101867   72173 main.go:141] libmachine: (embed-certs-989166) Creating domain...
	I1014 15:01:13.073323   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:13.073356   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073658   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:01:13.073682   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:01:13.075825   71679 machine.go:96] duration metric: took 4m37.425006s to provisionDockerMachine
	I1014 15:01:13.075866   71679 fix.go:56] duration metric: took 4m37.446829923s for fixHost
	I1014 15:01:13.075872   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 4m37.446848059s
	W1014 15:01:13.075889   71679 start.go:714] error starting host: provision: host is not running
	W1014 15:01:13.075983   71679 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1014 15:01:13.075992   71679 start.go:729] Will try again in 5 seconds ...
	I1014 15:01:14.319338   72173 main.go:141] libmachine: (embed-certs-989166) Waiting to get IP...
	I1014 15:01:14.320167   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.320582   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.320651   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.320577   73268 retry.go:31] will retry after 213.073722ms: waiting for machine to come up
	I1014 15:01:14.534913   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.535353   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.535375   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.535306   73268 retry.go:31] will retry after 316.205029ms: waiting for machine to come up
	I1014 15:01:14.852769   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.853201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.853261   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.853201   73268 retry.go:31] will retry after 399.414867ms: waiting for machine to come up
	I1014 15:01:15.253657   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.253955   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.253979   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.253917   73268 retry.go:31] will retry after 537.097034ms: waiting for machine to come up
	I1014 15:01:15.792362   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.792736   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.792763   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.792703   73268 retry.go:31] will retry after 594.582114ms: waiting for machine to come up
	I1014 15:01:16.388419   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:16.388838   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:16.388869   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:16.388793   73268 retry.go:31] will retry after 814.814512ms: waiting for machine to come up
	I1014 15:01:17.204791   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:17.205229   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:17.205255   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:17.205176   73268 retry.go:31] will retry after 971.673961ms: waiting for machine to come up
	I1014 15:01:18.178701   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:18.179100   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:18.179130   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:18.179048   73268 retry.go:31] will retry after 941.576822ms: waiting for machine to come up
	I1014 15:01:19.122097   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:19.122488   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:19.122514   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:19.122453   73268 retry.go:31] will retry after 1.5308999s: waiting for machine to come up
	I1014 15:01:18.077601   71679 start.go:360] acquireMachinesLock for no-preload-813300: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:01:20.655098   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:20.655524   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:20.655550   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:20.655475   73268 retry.go:31] will retry after 1.590510545s: waiting for machine to come up
	I1014 15:01:22.248128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:22.248551   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:22.248572   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:22.248511   73268 retry.go:31] will retry after 1.965898839s: waiting for machine to come up
	I1014 15:01:24.215742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:24.216187   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:24.216240   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:24.216136   73268 retry.go:31] will retry after 3.476459931s: waiting for machine to come up
	I1014 15:01:27.696804   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:27.697201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:27.697254   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:27.697175   73268 retry.go:31] will retry after 3.212757582s: waiting for machine to come up
	I1014 15:01:32.235659   72390 start.go:364] duration metric: took 3m35.715993521s to acquireMachinesLock for "default-k8s-diff-port-201291"
	I1014 15:01:32.235710   72390 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:32.235718   72390 fix.go:54] fixHost starting: 
	I1014 15:01:32.236084   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:32.236134   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:32.253294   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I1014 15:01:32.253760   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:32.254255   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:01:32.254275   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:32.254616   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:32.254797   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:32.254973   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:01:32.256494   72390 fix.go:112] recreateIfNeeded on default-k8s-diff-port-201291: state=Stopped err=<nil>
	I1014 15:01:32.256523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	W1014 15:01:32.256683   72390 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:32.258989   72390 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-201291" ...
	I1014 15:01:30.911781   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912283   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has current primary IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912313   72173 main.go:141] libmachine: (embed-certs-989166) Found IP for machine: 192.168.39.41
	I1014 15:01:30.912331   72173 main.go:141] libmachine: (embed-certs-989166) Reserving static IP address...
	I1014 15:01:30.912771   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.912799   72173 main.go:141] libmachine: (embed-certs-989166) DBG | skip adding static IP to network mk-embed-certs-989166 - found existing host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"}
	I1014 15:01:30.912806   72173 main.go:141] libmachine: (embed-certs-989166) Reserved static IP address: 192.168.39.41
	I1014 15:01:30.912815   72173 main.go:141] libmachine: (embed-certs-989166) Waiting for SSH to be available...
	I1014 15:01:30.912822   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Getting to WaitForSSH function...
	I1014 15:01:30.914919   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915273   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.915310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915392   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH client type: external
	I1014 15:01:30.915414   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa (-rw-------)
	I1014 15:01:30.915465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:30.915489   72173 main.go:141] libmachine: (embed-certs-989166) DBG | About to run SSH command:
	I1014 15:01:30.915503   72173 main.go:141] libmachine: (embed-certs-989166) DBG | exit 0
	I1014 15:01:31.042620   72173 main.go:141] libmachine: (embed-certs-989166) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:31.043061   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetConfigRaw
	I1014 15:01:31.043675   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.046338   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046679   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.046720   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046941   72173 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/config.json ...
	I1014 15:01:31.047132   72173 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:31.047149   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.047348   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.049453   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049835   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.049857   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049978   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.050147   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050282   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050419   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.050573   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.050814   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.050828   72173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:31.163270   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:31.163306   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163614   72173 buildroot.go:166] provisioning hostname "embed-certs-989166"
	I1014 15:01:31.163644   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.166684   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167009   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.167040   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.167416   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167582   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167718   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.167857   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.168025   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.168040   72173 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-989166 && echo "embed-certs-989166" | sudo tee /etc/hostname
	I1014 15:01:31.292369   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-989166
	
	I1014 15:01:31.292405   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.295057   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295425   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.295449   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295713   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.295915   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296076   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296220   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.296395   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.296552   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.296567   72173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-989166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-989166/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-989166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:31.411080   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:31.411112   72173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:31.411131   72173 buildroot.go:174] setting up certificates
	I1014 15:01:31.411142   72173 provision.go:84] configureAuth start
	I1014 15:01:31.411150   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.411396   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.413972   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414319   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.414341   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.416775   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417092   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.417113   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417278   72173 provision.go:143] copyHostCerts
	I1014 15:01:31.417340   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:31.417353   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:31.417437   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:31.417549   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:31.417559   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:31.417600   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:31.417677   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:31.417687   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:31.417721   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:31.417788   72173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.embed-certs-989166 san=[127.0.0.1 192.168.39.41 embed-certs-989166 localhost minikube]
	I1014 15:01:31.599973   72173 provision.go:177] copyRemoteCerts
	I1014 15:01:31.600034   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:31.600060   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.602964   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603270   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.603296   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.603665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.603821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.603949   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:31.688890   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:31.713474   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 15:01:31.737692   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 15:01:31.760955   72173 provision.go:87] duration metric: took 349.799595ms to configureAuth
	I1014 15:01:31.760986   72173 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:31.761172   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:31.761244   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.763800   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764149   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.764181   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.764494   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764636   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764732   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.764852   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.765002   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.765016   72173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:31.992783   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:31.992811   72173 machine.go:96] duration metric: took 945.667058ms to provisionDockerMachine
	I1014 15:01:31.992823   72173 start.go:293] postStartSetup for "embed-certs-989166" (driver="kvm2")
	I1014 15:01:31.992834   72173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:31.992848   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.993203   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:31.993235   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.995966   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996380   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.996418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996538   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.996714   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.996864   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.997003   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.081931   72173 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:32.086191   72173 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:32.086218   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:32.086287   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:32.086368   72173 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:32.086455   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:32.096414   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:32.120348   72173 start.go:296] duration metric: took 127.509685ms for postStartSetup
	I1014 15:01:32.120392   72173 fix.go:56] duration metric: took 19.044380323s for fixHost
	I1014 15:01:32.120412   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.123024   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123435   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.123465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123649   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.123832   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.123986   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.124152   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.124288   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:32.124487   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:32.124502   72173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:32.235487   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918092.208431219
	
	I1014 15:01:32.235513   72173 fix.go:216] guest clock: 1728918092.208431219
	I1014 15:01:32.235522   72173 fix.go:229] Guest: 2024-10-14 15:01:32.208431219 +0000 UTC Remote: 2024-10-14 15:01:32.12039587 +0000 UTC m=+242.874215269 (delta=88.035349ms)
	I1014 15:01:32.235559   72173 fix.go:200] guest clock delta is within tolerance: 88.035349ms
	I1014 15:01:32.235572   72173 start.go:83] releasing machines lock for "embed-certs-989166", held for 19.159587089s
	I1014 15:01:32.235600   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.235877   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:32.238642   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.238995   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.239025   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.239175   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239719   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239891   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239978   72173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:32.240031   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.240091   72173 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:32.240115   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.242742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243102   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243177   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243275   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243482   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.243653   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243664   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.243676   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243811   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243822   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.243929   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.244050   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.244168   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.357542   72173 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:32.365113   72173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:32.510557   72173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:32.516545   72173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:32.516628   72173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:32.533449   72173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:32.533473   72173 start.go:495] detecting cgroup driver to use...
	I1014 15:01:32.533549   72173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:32.549503   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:32.563126   72173 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:32.563184   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:32.576972   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:32.591047   72173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:32.704839   72173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:32.844770   72173 docker.go:233] disabling docker service ...
	I1014 15:01:32.844855   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:32.859524   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:32.872297   72173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:33.014291   72173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:33.136889   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:33.151656   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:33.170504   72173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:33.170575   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.180894   72173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:33.180968   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.192268   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.203509   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.215958   72173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:33.227981   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.241615   72173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.261168   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.273098   72173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:33.284050   72173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:33.284225   72173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:33.299547   72173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:33.310259   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:33.426563   72173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:33.526759   72173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:33.526817   72173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:33.532297   72173 start.go:563] Will wait 60s for crictl version
	I1014 15:01:33.532356   72173 ssh_runner.go:195] Run: which crictl
	I1014 15:01:33.536385   72173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:33.576222   72173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:33.576305   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.604603   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.636261   72173 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:33.637497   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:33.640450   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.640768   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:33.640806   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.641001   72173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:33.645241   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:33.658028   72173 kubeadm.go:883] updating cluster {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:33.658181   72173 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:33.658246   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:33.695188   72173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:33.695261   72173 ssh_runner.go:195] Run: which lz4
	I1014 15:01:33.699735   72173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:33.704540   72173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:33.704576   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:32.260401   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Start
	I1014 15:01:32.260569   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring networks are active...
	I1014 15:01:32.261176   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network default is active
	I1014 15:01:32.261498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network mk-default-k8s-diff-port-201291 is active
	I1014 15:01:32.261795   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Getting domain xml...
	I1014 15:01:32.262414   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Creating domain...
	I1014 15:01:33.520115   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting to get IP...
	I1014 15:01:33.521127   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521518   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.521520   73405 retry.go:31] will retry after 278.409801ms: waiting for machine to come up
	I1014 15:01:33.802289   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802720   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.802688   73405 retry.go:31] will retry after 362.923826ms: waiting for machine to come up
	I1014 15:01:34.167836   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168228   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168273   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.168163   73405 retry.go:31] will retry after 315.156371ms: waiting for machine to come up
	I1014 15:01:34.485445   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485855   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.485840   73405 retry.go:31] will retry after 573.46626ms: waiting for machine to come up
	I1014 15:01:35.061472   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.061997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.062027   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.061965   73405 retry.go:31] will retry after 519.420022ms: waiting for machine to come up
	I1014 15:01:35.582694   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583130   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583155   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.583062   73405 retry.go:31] will retry after 661.055324ms: waiting for machine to come up
	I1014 15:01:36.245525   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:36.245834   73405 retry.go:31] will retry after 870.411428ms: waiting for machine to come up
	I1014 15:01:35.120269   72173 crio.go:462] duration metric: took 1.42058504s to copy over tarball
	I1014 15:01:35.120372   72173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:37.206126   72173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08572724s)
	I1014 15:01:37.206168   72173 crio.go:469] duration metric: took 2.085859852s to extract the tarball
	I1014 15:01:37.206176   72173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:37.243007   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:37.289639   72173 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:37.289667   72173 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:37.289678   72173 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.31.1 crio true true} ...
	I1014 15:01:37.289793   72173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-989166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:37.289878   72173 ssh_runner.go:195] Run: crio config
	I1014 15:01:37.348641   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:37.348665   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:37.348684   72173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:37.348711   72173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-989166 NodeName:embed-certs-989166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:37.348861   72173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-989166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:37.348925   72173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:37.359204   72173 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:37.359272   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:37.368810   72173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 15:01:37.385402   72173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:37.401828   72173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1014 15:01:37.418811   72173 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:37.422655   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:37.434567   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:37.561408   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:37.579549   72173 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166 for IP: 192.168.39.41
	I1014 15:01:37.579577   72173 certs.go:194] generating shared ca certs ...
	I1014 15:01:37.579596   72173 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:37.579766   72173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:37.579878   72173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:37.579894   72173 certs.go:256] generating profile certs ...
	I1014 15:01:37.579998   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/client.key
	I1014 15:01:37.580079   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key.8939f8c2
	I1014 15:01:37.580148   72173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key
	I1014 15:01:37.580316   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:37.580364   72173 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:37.580376   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:37.580413   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:37.580445   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:37.580482   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:37.580536   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:37.581259   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:37.632130   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:37.678608   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:37.705377   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:37.731897   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 15:01:37.775043   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:37.801653   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:37.826547   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:37.852086   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:37.878715   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:37.905883   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:37.932458   72173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:37.951362   72173 ssh_runner.go:195] Run: openssl version
	I1014 15:01:37.957730   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:37.969936   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974871   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974931   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.981060   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:37.992086   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:38.003528   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008267   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008332   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.014243   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:38.025272   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:38.036191   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040751   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040804   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.046605   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:38.057815   72173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:38.062497   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:38.068889   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:38.075278   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:38.081663   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:38.087892   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:38.093748   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:01:38.099807   72173 kubeadm.go:392] StartCluster: {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:38.099912   72173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:38.099960   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.140896   72173 cri.go:89] found id: ""
	I1014 15:01:38.140973   72173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:38.151443   72173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:38.151462   72173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:38.151512   72173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:38.161419   72173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:38.162357   72173 kubeconfig.go:125] found "embed-certs-989166" server: "https://192.168.39.41:8443"
	I1014 15:01:38.164328   72173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:38.174731   72173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.41
	I1014 15:01:38.174767   72173 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:38.174782   72173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:38.174849   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.214903   72173 cri.go:89] found id: ""
	I1014 15:01:38.214982   72173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:38.232891   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:38.242711   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:38.242735   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:38.242793   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:01:38.251939   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:38.252019   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:38.262108   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:01:38.271688   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:38.271751   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:38.281420   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.290693   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:38.290752   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.300205   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:01:38.309174   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:38.309236   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:38.318616   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:38.328337   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:38.436297   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:37.118307   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:37.118706   73405 retry.go:31] will retry after 1.481454557s: waiting for machine to come up
	I1014 15:01:38.601780   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602168   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602212   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:38.602118   73405 retry.go:31] will retry after 1.22705177s: waiting for machine to come up
	I1014 15:01:39.831413   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831889   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831963   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:39.831838   73405 retry.go:31] will retry after 1.898722681s: waiting for machine to come up
	I1014 15:01:39.574410   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138075676s)
	I1014 15:01:39.574444   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.789417   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.873563   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:40.011579   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:40.011673   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:40.511877   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.012608   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.512235   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.012435   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.047878   72173 api_server.go:72] duration metric: took 2.036298602s to wait for apiserver process to appear ...
	I1014 15:01:42.047909   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:01:42.047935   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.298692   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.298726   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.298743   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.317315   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.317353   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.548651   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.559477   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:44.559513   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.048060   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.057070   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.057099   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.548344   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.552611   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.552640   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:46.048314   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:46.054943   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:01:46.062740   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:01:46.062769   72173 api_server.go:131] duration metric: took 4.014851988s to wait for apiserver health ...
	I1014 15:01:46.062779   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:46.062785   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:46.064824   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
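	The api_server.go lines above poll https://192.168.39.41:8443/healthz until it stops returning 500 and answers 200 (about 4s in this run). A minimal Go sketch of such a wait loop, assuming a self-signed apiserver certificate and an arbitrary 4-minute deadline (an illustration only, not minikube's actual api_server.go):

	// healthz_wait.go: sketch of waiting for the apiserver /healthz endpoint
	// to return 200, mirroring the "Checking apiserver healthz" lines above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// Assumption: the apiserver cert is self-signed in this setup, so the
			// probe skips verification; real code would trust the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is up
				}
			}
			time.Sleep(500 * time.Millisecond) // retry until 200 or deadline
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.41:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}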
	I1014 15:01:41.731928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732483   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732515   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:41.732435   73405 retry.go:31] will retry after 2.349662063s: waiting for machine to come up
	I1014 15:01:44.083975   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084492   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:44.084437   73405 retry.go:31] will retry after 3.472214726s: waiting for machine to come up
	I1014 15:01:46.066505   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:01:46.092975   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:01:46.123873   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:01:46.142575   72173 system_pods.go:59] 8 kube-system pods found
	I1014 15:01:46.142636   72173 system_pods.go:61] "coredns-7c65d6cfc9-r8x9s" [5a00095c-8777-412a-a7af-319a03d6153e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:01:46.142647   72173 system_pods.go:61] "etcd-embed-certs-989166" [981d2f54-f128-4527-a7cb-a6b9c647740b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:01:46.142658   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [31780b5a-6ebf-4c75-bd27-64a95193827f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:01:46.142668   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [345e7656-579a-4be9-bcf0-4117880a2988] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:01:46.142678   72173 system_pods.go:61] "kube-proxy-7p84k" [5d8243a8-7247-490f-9102-61008a614a67] Running
	I1014 15:01:46.142685   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [53b4b4a4-74ec-485e-99e3-b53c2edc80ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:01:46.142695   72173 system_pods.go:61] "metrics-server-6867b74b74-zc8zh" [5abf22c7-d271-4c3a-8e0e-cd867142cee1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:01:46.142703   72173 system_pods.go:61] "storage-provisioner" [6860efa4-c72f-477f-b9e1-e90ddcd112b5] Running
	I1014 15:01:46.142711   72173 system_pods.go:74] duration metric: took 18.811157ms to wait for pod list to return data ...
	I1014 15:01:46.142722   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:01:46.154420   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:01:46.154449   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:01:46.154463   72173 node_conditions.go:105] duration metric: took 11.735142ms to run NodePressure ...
	I1014 15:01:46.154483   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:46.417106   72173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422102   72173 kubeadm.go:739] kubelet initialised
	I1014 15:01:46.422127   72173 kubeadm.go:740] duration metric: took 4.991248ms waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422135   72173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:01:46.428014   72173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.432946   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432965   72173 pod_ready.go:82] duration metric: took 4.927935ms for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.432972   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432979   72173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.441849   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441868   72173 pod_ready.go:82] duration metric: took 8.882863ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.441877   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441883   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.446863   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446891   72173 pod_ready.go:82] duration metric: took 4.997658ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.446912   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446922   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.526949   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526972   72173 pod_ready.go:82] duration metric: took 80.035898ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.526981   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526987   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927217   72173 pod_ready.go:93] pod "kube-proxy-7p84k" in "kube-system" namespace has status "Ready":"True"
	I1014 15:01:46.927249   72173 pod_ready.go:82] duration metric: took 400.252417ms for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927263   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:48.933034   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
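	The pod_ready.go lines above wait for each system-critical pod to report the Ready condition, skipping pods whose node is not yet Ready. A hedged client-go sketch of the underlying Ready check (the kubeconfig path and pod name are assumptions for illustration, not minikube's helper):

	// podready.go: sketch of checking a pod's Ready condition with client-go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-989166", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}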
	I1014 15:01:47.558671   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559112   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:47.559067   73405 retry.go:31] will retry after 3.421253013s: waiting for machine to come up
	I1014 15:01:50.981602   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has current primary IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982167   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Found IP for machine: 192.168.50.128
	I1014 15:01:50.982186   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserving static IP address...
	I1014 15:01:50.982682   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.982703   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserved static IP address: 192.168.50.128
	I1014 15:01:50.982722   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | skip adding static IP to network mk-default-k8s-diff-port-201291 - found existing host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"}
	I1014 15:01:50.982743   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Getting to WaitForSSH function...
	I1014 15:01:50.982781   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for SSH to be available...
	I1014 15:01:50.985084   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.985640   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985750   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH client type: external
	I1014 15:01:50.985778   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa (-rw-------)
	I1014 15:01:50.985814   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:50.985832   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | About to run SSH command:
	I1014 15:01:50.985849   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | exit 0
	I1014 15:01:51.123927   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:51.124457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetConfigRaw
	I1014 15:01:51.125106   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.128286   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.128716   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.128770   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.129045   72390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/config.json ...
	I1014 15:01:51.129283   72390 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:51.129308   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.129551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.131756   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132164   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.132207   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132488   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.132701   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.132873   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.133022   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.133181   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.133421   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.133436   72390 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:51.244659   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:51.244691   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.244923   72390 buildroot.go:166] provisioning hostname "default-k8s-diff-port-201291"
	I1014 15:01:51.244953   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.245149   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.248061   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248429   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.248463   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248521   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.248697   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.248887   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.249034   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.249227   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.249448   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.249463   72390 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-201291 && echo "default-k8s-diff-port-201291" | sudo tee /etc/hostname
	I1014 15:01:51.373260   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-201291
	
	I1014 15:01:51.373293   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.376195   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376528   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.376549   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376752   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.376962   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377159   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377296   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.377446   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.377657   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.377676   72390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-201291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-201291/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-201291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:52.179441   72639 start.go:364] duration metric: took 3m34.072351032s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 15:01:52.179497   72639 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:52.179505   72639 fix.go:54] fixHost starting: 
	I1014 15:01:52.179834   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:52.179873   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:52.196724   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I1014 15:01:52.197171   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:52.197649   72639 main.go:141] libmachine: Using API Version  1
	I1014 15:01:52.197673   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:52.198010   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:52.198191   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:01:52.198337   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 15:01:52.199789   72639 fix.go:112] recreateIfNeeded on old-k8s-version-399767: state=Stopped err=<nil>
	I1014 15:01:52.199826   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	W1014 15:01:52.199998   72639 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:52.202220   72639 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	I1014 15:01:52.203601   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .Start
	I1014 15:01:52.203771   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 15:01:52.204575   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 15:01:52.204971   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 15:01:52.205326   72639 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 15:01:52.206026   72639 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 15:01:51.488446   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:51.488486   72390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:51.488535   72390 buildroot.go:174] setting up certificates
	I1014 15:01:51.488553   72390 provision.go:84] configureAuth start
	I1014 15:01:51.488570   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.488867   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.491749   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492141   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.492171   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492351   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.494197   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.494524   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494693   72390 provision.go:143] copyHostCerts
	I1014 15:01:51.494745   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:51.494764   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:51.494834   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:51.494945   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:51.494958   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:51.494992   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:51.495081   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:51.495095   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:51.495122   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:51.495214   72390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-201291 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-201291 localhost minikube]
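	The provision.go line above generates a server certificate whose SANs cover 127.0.0.1, 192.168.50.128, the machine hostname, localhost and minikube. A self-contained Go sketch of issuing such a certificate (the in-memory CA is an assumption made to keep the example runnable; the real flow signs with the existing ca.pem/ca-key.pem):

	// servercert.go: sketch of issuing a server cert with the SANs from the log.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA, generated here only so the sketch runs standalone.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the SANs listed in the provision.go line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-201291"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"default-k8s-diff-port-201291", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.128")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(srvKey)}), 0o600)
	}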
	I1014 15:01:51.567041   72390 provision.go:177] copyRemoteCerts
	I1014 15:01:51.567098   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:51.567121   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.570006   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570340   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.570368   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570562   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.570769   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.570941   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.571047   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:51.652956   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:51.677959   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 15:01:51.702009   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:01:51.727016   72390 provision.go:87] duration metric: took 238.449189ms to configureAuth
	I1014 15:01:51.727043   72390 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:51.727207   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:51.727276   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.729742   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730043   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.730065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.730418   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730578   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730735   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.730891   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.731097   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.731114   72390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:51.942847   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:51.942874   72390 machine.go:96] duration metric: took 813.575194ms to provisionDockerMachine
	I1014 15:01:51.942888   72390 start.go:293] postStartSetup for "default-k8s-diff-port-201291" (driver="kvm2")
	I1014 15:01:51.942903   72390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:51.942926   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.943250   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:51.943283   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.946246   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946608   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.946638   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946799   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.946984   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.947165   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.947293   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.030124   72390 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:52.034493   72390 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:52.034525   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:52.034625   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:52.034740   72390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:52.034834   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:52.044919   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:52.068326   72390 start.go:296] duration metric: took 125.426221ms for postStartSetup
	I1014 15:01:52.068370   72390 fix.go:56] duration metric: took 19.832650283s for fixHost
	I1014 15:01:52.068394   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.070949   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071362   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.071388   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071588   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.071788   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.071908   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.072065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.072231   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:52.072449   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:52.072468   72390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:52.179264   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918112.149610573
	
	I1014 15:01:52.179291   72390 fix.go:216] guest clock: 1728918112.149610573
	I1014 15:01:52.179301   72390 fix.go:229] Guest: 2024-10-14 15:01:52.149610573 +0000 UTC Remote: 2024-10-14 15:01:52.06837553 +0000 UTC m=+235.685992564 (delta=81.235043ms)
	I1014 15:01:52.179349   72390 fix.go:200] guest clock delta is within tolerance: 81.235043ms
	I1014 15:01:52.179354   72390 start.go:83] releasing machines lock for "default-k8s-diff-port-201291", held for 19.943664398s
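	The fix.go lines above read the guest clock via `date +%s.%N`, compare it with the host clock, and accept the 81ms delta as within tolerance. A small Go sketch of that comparison, with the tolerance value assumed for illustration (the timestamps are the ones from the log):

	// clockdelta.go: sketch of parsing a guest `date +%s.%N` reading and
	// checking it against a reference time within a tolerance.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	func guestClockDelta(guestOut string, local time.Time) (time.Duration, error) {
		parts := strings.Split(strings.TrimSpace(guestOut), ".")
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, err
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, err
			}
		}
		return time.Unix(sec, nsec).Sub(local), nil
	}

	func main() {
		// Guest and remote timestamps taken from the log above.
		delta, err := guestClockDelta("1728918112.149610573", time.Unix(1728918112, 68375530))
		if err != nil {
			panic(err)
		}
		const tolerance = time.Second // assumed tolerance, for illustration only
		if math.Abs(float64(delta)) < float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance %v\n", delta, tolerance)
		}
	}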
	I1014 15:01:52.179387   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.179666   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:52.182457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.182834   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.182861   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.183000   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183598   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183883   72390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:52.183928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.183993   72390 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:52.184017   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.186499   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186692   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186890   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.186915   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187021   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.187050   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187086   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187288   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187331   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187479   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187485   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187597   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.187688   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187843   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.264102   72390 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:52.291233   72390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:52.443318   72390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:52.450321   72390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:52.450400   72390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:52.467949   72390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:52.467975   72390 start.go:495] detecting cgroup driver to use...
	I1014 15:01:52.468039   72390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:52.485758   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:52.500662   72390 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:52.500729   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:52.520846   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:52.535606   72390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:52.671062   72390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:52.845631   72390 docker.go:233] disabling docker service ...
	I1014 15:01:52.845694   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:52.867403   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:52.882344   72390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:53.020570   72390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:53.157941   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:53.174989   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:53.195729   72390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:53.195799   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.207613   72390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:53.207671   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.218838   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.231186   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.247521   72390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:53.258128   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.269119   72390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.287810   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.298576   72390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:53.308114   72390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:53.308169   72390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:53.322207   72390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:53.332284   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:53.483702   72390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:53.581260   72390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:53.581341   72390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:53.586042   72390 start.go:563] Will wait 60s for crictl version
	I1014 15:01:53.586105   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:01:53.589931   72390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:53.634776   72390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:53.634864   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.664242   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.698374   72390 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:50.933590   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:52.935445   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:53.699730   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:53.702837   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703224   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:53.703245   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703528   72390 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:53.707720   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:53.721953   72390 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:53.722106   72390 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:53.722165   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:53.779083   72390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:53.779139   72390 ssh_runner.go:195] Run: which lz4
	I1014 15:01:53.783197   72390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:53.787515   72390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:53.787549   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:55.277150   72390 crio.go:462] duration metric: took 1.493980352s to copy over tarball
	I1014 15:01:55.277212   72390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:53.506315   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 15:01:53.507576   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.508228   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.508297   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.508202   73581 retry.go:31] will retry after 220.59125ms: waiting for machine to come up
	I1014 15:01:53.730853   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.731286   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.731339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.731257   73581 retry.go:31] will retry after 321.559387ms: waiting for machine to come up
	I1014 15:01:54.054891   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.055482   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.055509   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.055443   73581 retry.go:31] will retry after 444.912998ms: waiting for machine to come up
	I1014 15:01:54.502125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.502479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.502525   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.502462   73581 retry.go:31] will retry after 600.214254ms: waiting for machine to come up
	I1014 15:01:55.104962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.105479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.105504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.105425   73581 retry.go:31] will retry after 686.77698ms: waiting for machine to come up
	I1014 15:01:55.794125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.794825   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.794871   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.794717   73581 retry.go:31] will retry after 926.146146ms: waiting for machine to come up
	I1014 15:01:56.722712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:56.723153   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:56.723183   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:56.723112   73581 retry.go:31] will retry after 1.108272037s: waiting for machine to come up
	I1014 15:01:57.832729   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:57.833304   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:57.833356   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:57.833279   73581 retry.go:31] will retry after 1.442737664s: waiting for machine to come up
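
The "will retry after …" lines above come from minikube's retry helper, which re-runs the IP lookup with a growing, jittered delay until the machine reports an address. A minimal Go sketch of that backoff pattern (an illustration of the idea, not the retry.go implementation):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or the timeout passes,
// sleeping a jittered, growing interval between attempts.
func retryWithBackoff(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait = wait * 3 / 2 // grow the base interval each round
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 10*time.Second)
	fmt.Println("done:", err, "after", attempts, "attempts")
}
```
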
	I1014 15:01:55.435691   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.933561   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.424526   72390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.147277316s)
	I1014 15:01:57.424559   72390 crio.go:469] duration metric: took 2.147385522s to extract the tarball
	I1014 15:01:57.424566   72390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:57.461792   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:57.504424   72390 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:57.504450   72390 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:57.504460   72390 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.1 crio true true} ...
	I1014 15:01:57.504656   72390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-201291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:57.504759   72390 ssh_runner.go:195] Run: crio config
	I1014 15:01:57.555431   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:01:57.555453   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:57.555462   72390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:57.555482   72390 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-201291 NodeName:default-k8s-diff-port-201291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:57.555593   72390 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-201291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.128"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:57.555652   72390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:57.565953   72390 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:57.566025   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:57.576141   72390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1014 15:01:57.594855   72390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:57.611249   72390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
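
The kubeadm.yaml pushed to the node above is generated from the cluster's node IP, port, and Kubernetes version. A minimal Go sketch of rendering a trimmed-down ClusterConfiguration fragment with text/template, using the values from this run; the real generator in minikube's kubeadm bootstrapper covers many more fields:

```go
package main

import (
	"os"
	"text/template"
)

// A trimmed-down stand-in for the ClusterConfiguration fragment above.
const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: {{.Endpoint}}:{{.Port}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type params struct {
	Endpoint, Version, PodSubnet, ServiceSubnet string
	Port                                        int
}

func main() {
	t := template.Must(template.New("cfg").Parse(clusterCfg))
	// Values taken from this run; any other cluster would substitute its own.
	_ = t.Execute(os.Stdout, params{
		Endpoint:      "control-plane.minikube.internal",
		Port:          8444,
		Version:       "v1.31.1",
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	})
}
```
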
	I1014 15:01:57.628363   72390 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:57.632552   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:57.645588   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:57.769192   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
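
The /etc/hosts update above is a bash one-liner that drops any stale control-plane.minikube.internal line and appends the current IP. A minimal Go sketch of the same idempotent rewrite, operating on the file contents as a string (names and values taken from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any existing line that maps to name,
// then appends a fresh "ip<TAB>name" mapping.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	return strings.TrimRight(strings.Join(kept, "\n"), "\n") +
		fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	in := "127.0.0.1\tlocalhost\n192.168.50.1\tcontrol-plane.minikube.internal\n"
	fmt.Print(ensureHostsEntry(in, "192.168.50.128", "control-plane.minikube.internal"))
}
```
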
	I1014 15:01:57.787654   72390 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291 for IP: 192.168.50.128
	I1014 15:01:57.787677   72390 certs.go:194] generating shared ca certs ...
	I1014 15:01:57.787695   72390 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:57.787865   72390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:57.787916   72390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:57.787930   72390 certs.go:256] generating profile certs ...
	I1014 15:01:57.788084   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/client.key
	I1014 15:01:57.788174   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key.517dfce8
	I1014 15:01:57.788223   72390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key
	I1014 15:01:57.788371   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:57.788407   72390 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:57.788417   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:57.788439   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:57.788460   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:57.788482   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:57.788521   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:57.789141   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:57.821159   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:57.875530   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:57.902687   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:57.935658   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 15:01:57.961987   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:57.987107   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:58.013544   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:58.039793   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:58.071154   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:58.102574   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:58.127398   72390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:58.144906   72390 ssh_runner.go:195] Run: openssl version
	I1014 15:01:58.150817   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:58.162122   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167170   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167240   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.173692   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:58.185769   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:58.197045   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201652   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201716   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.207559   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:58.218921   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:58.230822   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235774   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235832   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.241546   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:58.252618   72390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:58.257509   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:58.263891   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:58.270085   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:58.276427   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:58.282346   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:58.288396   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
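
The `openssl x509 ... -checkend 86400` calls above ask whether each certificate expires within the next 24 hours. The same question can be answered with Go's crypto/x509; a minimal sketch, assuming one of the certificate paths from this run:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// will expire within d — the same check `openssl x509 -checkend` performs.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path taken from the checks above; adjust for your own host.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```
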
	I1014 15:01:58.294386   72390 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:58.294472   72390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:58.294517   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.342008   72390 cri.go:89] found id: ""
	I1014 15:01:58.342088   72390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:58.352478   72390 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:58.352512   72390 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:58.352566   72390 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:58.363158   72390 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:58.364106   72390 kubeconfig.go:125] found "default-k8s-diff-port-201291" server: "https://192.168.50.128:8444"
	I1014 15:01:58.366079   72390 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:58.375635   72390 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I1014 15:01:58.375666   72390 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:58.375680   72390 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:58.375733   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.411846   72390 cri.go:89] found id: ""
	I1014 15:01:58.411923   72390 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:58.428602   72390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:58.439214   72390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:58.439239   72390 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:58.439293   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1014 15:01:58.448475   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:58.448528   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:58.457816   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1014 15:01:58.467279   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:58.467352   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:58.477479   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.487899   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:58.487968   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.498296   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1014 15:01:58.507910   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:58.507977   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:58.517901   72390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:58.527983   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:58.654226   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.576099   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.790552   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.879043   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.963369   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:59.963462   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.464403   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.963891   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.994849   72390 api_server.go:72] duration metric: took 1.031477803s to wait for apiserver process to appear ...
	I1014 15:02:00.994875   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:00.994897   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
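
The apiserver readiness wait above polls https://192.168.50.128:8444/healthz until it returns 200, tolerating the interim 403 and 500 responses shown below. A minimal Go sketch of such a poll, assuming the address from this run and skipping TLS verification the way an anonymous probe without the cluster CA would (an illustration, not minikube's api_server.go implementation):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Address taken from this run; an anonymous client will see 403/500
	// until RBAC bootstrap and the post-start hooks finish.
	url := "https://192.168.50.128:8444/healthz"
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports healthy
			}
		} else {
			fmt.Println("healthz unreachable:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```
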
	I1014 15:01:59.278031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:59.278558   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:59.278586   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:59.278519   73581 retry.go:31] will retry after 1.187069828s: waiting for machine to come up
	I1014 15:02:00.467810   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:00.468237   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:00.468267   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:00.468195   73581 retry.go:31] will retry after 1.667312665s: waiting for machine to come up
	I1014 15:02:02.137067   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:02.137569   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:02.137590   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:02.137530   73581 retry.go:31] will retry after 1.910892221s: waiting for machine to come up
	I1014 15:01:59.994818   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:00.130085   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:00.130109   72173 pod_ready.go:82] duration metric: took 13.202838085s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:00.130121   72173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:02.142821   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:03.649728   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:03.649764   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:03.649780   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:03.754772   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:03.754805   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:03.995106   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.020015   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.020040   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.495270   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.501643   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.501694   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.995049   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.002865   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:05.002893   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:05.495412   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.499936   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:02:05.506656   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:02:05.506685   72390 api_server.go:131] duration metric: took 4.511803211s to wait for apiserver health ...
	I1014 15:02:05.506694   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:02:05.506700   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:05.508420   72390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:02:05.509685   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:02:05.521314   72390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:02:05.543021   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:02:05.553508   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:02:05.553539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:02:05.553548   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:02:05.553555   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:02:05.553562   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:02:05.553567   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:02:05.553572   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:02:05.553577   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:02:05.553581   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:02:05.553587   72390 system_pods.go:74] duration metric: took 10.544168ms to wait for pod list to return data ...
	I1014 15:02:05.553593   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:02:05.558889   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:02:05.558917   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:02:05.558929   72390 node_conditions.go:105] duration metric: took 5.331009ms to run NodePressure ...
	I1014 15:02:05.558948   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:05.819037   72390 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826431   72390 kubeadm.go:739] kubelet initialised
	I1014 15:02:05.826456   72390 kubeadm.go:740] duration metric: took 7.391664ms waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826463   72390 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:05.833547   72390 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.840150   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840175   72390 pod_ready.go:82] duration metric: took 6.599969ms for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.840186   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840205   72390 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.850319   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850346   72390 pod_ready.go:82] duration metric: took 10.130163ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.850359   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850368   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.857192   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857215   72390 pod_ready.go:82] duration metric: took 6.838793ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.857228   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857237   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.946611   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946646   72390 pod_ready.go:82] duration metric: took 89.397304ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.946663   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946674   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.346368   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346400   72390 pod_ready.go:82] duration metric: took 399.71513ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.346413   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346423   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.746899   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746928   72390 pod_ready.go:82] duration metric: took 400.494872ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.746941   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746951   72390 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:07.146147   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146175   72390 pod_ready.go:82] duration metric: took 399.215075ms for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:07.146199   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146215   72390 pod_ready.go:39] duration metric: took 1.319742206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
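	(Editor's note: the readiness loop above is minikube's in-process equivalent of waiting on the node and the labelled control-plane pods. A rough manual approximation, using the context/profile name and labels taken from the log above and assuming the cluster is reachable, would be:
		kubectl --context default-k8s-diff-port-201291 wait --for=condition=Ready node/default-k8s-diff-port-201291 --timeout=4m
		kubectl --context default-k8s-diff-port-201291 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=4m
	This is only an illustrative sketch, not part of the test harness.)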
	I1014 15:02:07.146237   72390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:02:07.158049   72390 ops.go:34] apiserver oom_adj: -16
	I1014 15:02:07.158072   72390 kubeadm.go:597] duration metric: took 8.805549392s to restartPrimaryControlPlane
	I1014 15:02:07.158082   72390 kubeadm.go:394] duration metric: took 8.863707122s to StartCluster
	I1014 15:02:07.158102   72390 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.158192   72390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:07.159622   72390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.159917   72390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:02:07.159968   72390 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:02:07.160052   72390 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160074   72390 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160086   72390 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:02:07.160125   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160133   72390 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160166   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:07.160181   72390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-201291"
	I1014 15:02:07.160179   72390 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160228   72390 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160251   72390 addons.go:243] addon metrics-server should already be in state true
	I1014 15:02:07.160312   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160472   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160508   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160692   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160712   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160729   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160770   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.161892   72390 out.go:177] * Verifying Kubernetes components...
	I1014 15:02:07.163368   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:07.176101   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1014 15:02:07.176351   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I1014 15:02:07.176705   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.176834   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.177272   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177298   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177392   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177413   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177600   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I1014 15:02:07.177639   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.177703   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.178070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.178181   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178244   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178252   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178285   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178566   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.178590   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.178944   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.179107   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.181971   72390 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.181989   72390 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:02:07.182024   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.182278   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.182322   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.194707   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1014 15:02:07.195401   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.196015   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.196043   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.196413   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.196511   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35479
	I1014 15:02:07.196618   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.196977   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.197479   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.197497   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.197520   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I1014 15:02:07.197848   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.197981   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.198048   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.198544   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.198567   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.198636   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199017   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.199817   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.199824   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199864   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.200860   72390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:07.201674   72390 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:02:04.050521   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:04.051060   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:04.051099   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:04.051015   73581 retry.go:31] will retry after 2.29433775s: waiting for machine to come up
	I1014 15:02:06.347519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:06.347985   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:06.348004   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:06.347945   73581 retry.go:31] will retry after 3.499922823s: waiting for machine to come up
	I1014 15:02:07.202461   72390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.202476   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:02:07.202491   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.203259   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:02:07.203275   72390 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:02:07.203292   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.205760   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206124   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.206150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206375   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.206533   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.206676   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.206729   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206858   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.207134   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.207150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.207248   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.207455   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.207559   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.207677   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.219554   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I1014 15:02:07.220070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.220483   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.220508   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.220842   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.221004   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.222706   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.222961   72390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.222979   72390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:02:07.222997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.225715   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226209   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.226250   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.226964   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.227118   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.227254   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.362105   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:07.384279   72390 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:07.438536   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.551868   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:02:07.551897   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:02:07.606347   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.656287   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:02:07.656313   72390 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:02:07.687002   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.687027   72390 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:02:07.751715   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.810869   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.810902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811193   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.811247   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811262   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811273   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.811281   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811546   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811562   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811576   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.819897   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.819917   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.820156   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.820206   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.820179   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581553   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581583   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.581902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581943   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.581955   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.581974   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581986   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.582197   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.582211   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595214   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595493   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.595569   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595589   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595609   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595623   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595833   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595847   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595864   72390 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-201291"
	I1014 15:02:08.597967   72390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
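	(Editor's note: the addon manifests above are applied with the cluster's bundled kubectl against /var/lib/minikube/kubeconfig. A minimal hand spot-check of the result, assuming the default-k8s-diff-port-201291 profile is still running, might be:
		minikube -p default-k8s-diff-port-201291 addons list
		kubectl --context default-k8s-diff-port-201291 -n kube-system get deploy metrics-server
		kubectl --context default-k8s-diff-port-201291 get storageclass
	Sketch only; the profile and addon names are the ones reported in the log.)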
	I1014 15:02:04.638029   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:07.139428   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.248505   71679 start.go:364] duration metric: took 53.170862497s to acquireMachinesLock for "no-preload-813300"
	I1014 15:02:11.248567   71679 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:02:11.248581   71679 fix.go:54] fixHost starting: 
	I1014 15:02:11.248978   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:11.249022   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:11.266270   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I1014 15:02:11.266780   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:11.267302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:02:11.267319   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:11.267675   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:11.267842   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:11.267984   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:02:11.269459   71679 fix.go:112] recreateIfNeeded on no-preload-813300: state=Stopped err=<nil>
	I1014 15:02:11.269484   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	W1014 15:02:11.269589   71679 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:02:11.271434   71679 out.go:177] * Restarting existing kvm2 VM for "no-preload-813300" ...
	I1014 15:02:08.599138   72390 addons.go:510] duration metric: took 1.439175047s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:02:09.388573   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:09.851017   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851562   72639 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 15:02:09.851582   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851587   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 15:02:09.851961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.851991   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | skip adding static IP to network mk-old-k8s-version-399767 - found existing host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"}
	I1014 15:02:09.852009   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 15:02:09.852021   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 15:02:09.852031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 15:02:09.854039   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854351   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.854378   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854493   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 15:02:09.854517   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 15:02:09.854547   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:09.854559   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 15:02:09.854572   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 15:02:09.979174   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:09.979594   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 15:02:09.980252   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:09.983038   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.983502   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983891   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 15:02:09.984191   72639 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:09.984220   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:09.984487   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:09.986947   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987361   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.987389   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987514   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:09.987682   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987830   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987924   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:09.988076   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:09.988338   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:09.988352   72639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:10.098944   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:10.098968   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099242   72639 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 15:02:10.099268   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099437   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.101961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102298   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.102320   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102468   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.102670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102846   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102980   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.103124   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.103337   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.103353   72639 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 15:02:10.226037   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 15:02:10.226069   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.228712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229059   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.229082   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229228   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.229408   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229549   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.229804   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.230001   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.230018   72639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:10.344175   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:10.344206   72639 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:10.344270   72639 buildroot.go:174] setting up certificates
	I1014 15:02:10.344284   72639 provision.go:84] configureAuth start
	I1014 15:02:10.344302   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.344632   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:10.347200   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347587   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.347623   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347812   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.349962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350332   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.350364   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350502   72639 provision.go:143] copyHostCerts
	I1014 15:02:10.350558   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:10.350574   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:10.350646   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:10.350734   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:10.350742   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:10.350762   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:10.350812   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:10.350819   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:10.350837   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:10.350887   72639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
	I1014 15:02:10.602118   72639 provision.go:177] copyRemoteCerts
	I1014 15:02:10.602175   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:10.602199   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.604519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604744   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.604776   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.605127   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.605273   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.605403   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:10.689081   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:10.713512   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 15:02:10.738086   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:10.762274   72639 provision.go:87] duration metric: took 417.977128ms to configureAuth
	I1014 15:02:10.762307   72639 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:10.762486   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 15:02:10.762552   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.765134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765442   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.765469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765600   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.765756   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765903   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765998   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.766131   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.766297   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.766311   72639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:11.011252   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:11.011279   72639 machine.go:96] duration metric: took 1.027069423s to provisionDockerMachine
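	(Editor's note: provisionDockerMachine above pushes the TLS material and a CRI-O drop-in to the guest over SSH. A rough way to confirm it landed, assuming the old-k8s-version-399767 VM is up and reachable, would be:
		minikube ssh -p old-k8s-version-399767 -- "cat /etc/sysconfig/crio.minikube"
		minikube ssh -p old-k8s-version-399767 -- "ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem"
	Illustrative only; paths are the remote cert paths shown in the provisioning log.)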
	I1014 15:02:11.011292   72639 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 15:02:11.011304   72639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:11.011349   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.011716   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:11.011751   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.014418   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014754   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.014790   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.015125   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.015260   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.015376   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.097883   72639 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:11.102452   72639 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:11.102481   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:11.102551   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:11.102687   72639 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:11.102781   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:11.112774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:11.138211   72639 start.go:296] duration metric: took 126.906035ms for postStartSetup
	I1014 15:02:11.138247   72639 fix.go:56] duration metric: took 18.958741429s for fixHost
	I1014 15:02:11.138270   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.140740   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141100   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.141139   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141280   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.141484   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141668   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141811   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.141974   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:11.142131   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:11.142141   72639 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:11.248330   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918131.224010283
	
	I1014 15:02:11.248355   72639 fix.go:216] guest clock: 1728918131.224010283
	I1014 15:02:11.248373   72639 fix.go:229] Guest: 2024-10-14 15:02:11.224010283 +0000 UTC Remote: 2024-10-14 15:02:11.138252894 +0000 UTC m=+233.173555624 (delta=85.757389ms)
	I1014 15:02:11.248399   72639 fix.go:200] guest clock delta is within tolerance: 85.757389ms
	I1014 15:02:11.248406   72639 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 19.068928968s
	I1014 15:02:11.248434   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.248692   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:11.251774   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.252176   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252358   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.252840   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253017   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253104   72639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:11.253150   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.253232   72639 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:11.253259   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.256105   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256529   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256662   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.256732   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256771   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256844   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.256932   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.257003   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257141   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.257131   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.257296   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257414   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.363838   72639 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:11.370414   72639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:11.521232   72639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:11.527623   72639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:11.527712   72639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:11.544532   72639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:11.544559   72639 start.go:495] detecting cgroup driver to use...
	I1014 15:02:11.544614   72639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:11.561693   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:11.576555   72639 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:11.576622   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:11.593830   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:11.608785   72639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:11.731034   72639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:11.909278   72639 docker.go:233] disabling docker service ...
	I1014 15:02:11.909359   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:11.931218   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:11.951710   72639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:12.103012   72639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:12.252290   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:12.270497   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:12.293240   72639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 15:02:12.293297   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.304881   72639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:12.304958   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.316294   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.328591   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.340085   72639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:12.351765   72639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:12.362454   72639 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:12.362525   72639 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:12.376865   72639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
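	The failed sysctl read a few lines up is expected: net.bridge.bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the modprobe follows it. A minimal manual reproduction of the same prerequisite setup (not part of the logged run) would be:
	  sudo modprobe br_netfilter
	  sysctl net.bridge.bridge-nf-call-iptables
	  sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	Both settings matter on a Kubernetes node: bridge-nf-call-iptables makes bridged pod traffic traverse iptables, and ip_forward lets the node route packets between pod and service networks.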
	I1014 15:02:12.387779   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:12.528541   72639 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:12.635262   72639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:12.635335   72639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:12.641070   72639 start.go:563] Will wait 60s for crictl version
	I1014 15:02:12.641121   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:12.645111   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:12.691103   72639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:12.691199   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.720182   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.754856   72639 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 15:02:12.756005   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:12.759369   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.759890   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:12.759924   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.760164   72639 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:12.765342   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
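	The shell pipeline above is an idempotent rewrite of /etc/hosts: it drops any existing host.minikube.internal entry, appends the current gateway IP, and copies the temp file back over /etc/hosts, so repeated starts never duplicate the line. The same pattern, generalized with placeholder values (my.hostname and 203.0.113.1 are illustrative only):
	  { grep -v $'\tmy.hostname$' /etc/hosts; printf '203.0.113.1\tmy.hostname\n'; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts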
	I1014 15:02:12.782182   72639 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1014 15:02:12.782307   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 15:02:12.782374   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:12.841797   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:12.841871   72639 ssh_runner.go:195] Run: which lz4
	I1014 15:02:12.846193   72639 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:02:12.850982   72639 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:02:12.851019   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 15:02:09.636366   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.637804   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:13.638684   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.272626   71679 main.go:141] libmachine: (no-preload-813300) Calling .Start
	I1014 15:02:11.272827   71679 main.go:141] libmachine: (no-preload-813300) Ensuring networks are active...
	I1014 15:02:11.273510   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network default is active
	I1014 15:02:11.273954   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network mk-no-preload-813300 is active
	I1014 15:02:11.274410   71679 main.go:141] libmachine: (no-preload-813300) Getting domain xml...
	I1014 15:02:11.275263   71679 main.go:141] libmachine: (no-preload-813300) Creating domain...
	I1014 15:02:12.614590   71679 main.go:141] libmachine: (no-preload-813300) Waiting to get IP...
	I1014 15:02:12.615572   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.616018   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.616092   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.616013   73776 retry.go:31] will retry after 302.312986ms: waiting for machine to come up
	I1014 15:02:12.919678   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.920039   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.920074   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.920005   73776 retry.go:31] will retry after 371.392955ms: waiting for machine to come up
	I1014 15:02:13.292596   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.293214   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.293244   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.293164   73776 retry.go:31] will retry after 299.379251ms: waiting for machine to come up
	I1014 15:02:13.594808   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.595344   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.595370   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.595297   73776 retry.go:31] will retry after 598.480386ms: waiting for machine to come up
	I1014 15:02:14.195149   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.195744   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.195775   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.195696   73776 retry.go:31] will retry after 567.581822ms: waiting for machine to come up
	I1014 15:02:14.764315   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.764863   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.764886   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.764815   73776 retry.go:31] will retry after 587.597591ms: waiting for machine to come up
	I1014 15:02:15.353495   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:15.353948   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:15.353980   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:15.353896   73776 retry.go:31] will retry after 1.024496536s: waiting for machine to come up
	I1014 15:02:11.889135   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:13.889200   72390 node_ready.go:49] node "default-k8s-diff-port-201291" has status "Ready":"True"
	I1014 15:02:13.889228   72390 node_ready.go:38] duration metric: took 6.504919545s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:13.889240   72390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:13.898112   72390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:15.907127   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:14.579304   72639 crio.go:462] duration metric: took 1.733147869s to copy over tarball
	I1014 15:02:14.579405   72639 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:02:17.644891   72639 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06545265s)
	I1014 15:02:17.644954   72639 crio.go:469] duration metric: took 3.065620277s to extract the tarball
	I1014 15:02:17.644979   72639 ssh_runner.go:146] rm: /preloaded.tar.lz4
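	The --xattrs --xattrs-include security.capability flags on the extraction above preserve any security.capability extended attributes present in the preload archive; without them, files that rely on Linux file capabilities would lose those bits on extraction. If needed, the extracted tree could be spot-checked with an illustrative command such as (requires libcap's getcap, not part of the logged run):
	  sudo getcap -r /var/lib 2>/dev/null | head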
	I1014 15:02:17.688304   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:17.727862   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:17.727888   72639 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:17.727984   72639 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.727995   72639 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.728006   72639 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.728036   72639 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.727986   72639 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.728104   72639 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.728169   72639 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 15:02:17.728267   72639 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.729941   72639 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729954   72639 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 15:02:17.729984   72639 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.729999   72639 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.729913   72639 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.730335   72639 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.889181   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.912728   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.919124   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.920117   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.934314   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 15:02:17.951143   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.956588   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.964968   72639 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 15:02:17.965031   72639 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.965066   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:16.139535   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:18.637888   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:16.379768   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:16.380165   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:16.380236   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:16.380142   73776 retry.go:31] will retry after 1.022289492s: waiting for machine to come up
	I1014 15:02:17.403892   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:17.404406   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:17.404430   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:17.404383   73776 retry.go:31] will retry after 1.277226075s: waiting for machine to come up
	I1014 15:02:18.683704   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:18.684176   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:18.684200   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:18.684126   73776 retry.go:31] will retry after 2.146714263s: waiting for machine to come up
	I1014 15:02:18.406707   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.412201   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:21.406229   72390 pod_ready.go:93] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.406256   72390 pod_ready.go:82] duration metric: took 7.508120497s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.406269   72390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413868   72390 pod_ready.go:93] pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.413896   72390 pod_ready.go:82] duration metric: took 7.618897ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413910   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:18.041388   72639 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 15:02:18.041436   72639 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.041489   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041504   72639 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 15:02:18.041540   72639 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.041579   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069534   72639 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 15:02:18.069582   72639 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 15:02:18.069631   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069794   72639 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 15:02:18.069821   72639 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.069852   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.096492   72639 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 15:02:18.096536   72639 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.096575   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104764   72639 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 15:02:18.104810   72639 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.104816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.104854   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104876   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.104885   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.104980   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.104984   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.105025   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.119784   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.213816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.241644   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.288717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.288820   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.288931   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.289005   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.295481   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.376936   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.393755   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.449717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.449798   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.449824   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.449904   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.461905   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.508804   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 15:02:18.521502   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 15:02:18.612103   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 15:02:18.613450   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 15:02:18.613548   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 15:02:18.613625   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 15:02:18.613715   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 15:02:18.741774   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:18.888495   72639 cache_images.go:92] duration metric: took 1.16058525s to LoadCachedImages
	W1014 15:02:18.888578   72639 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1014 15:02:18.888594   72639 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 15:02:18.888707   72639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:18.888791   72639 ssh_runner.go:195] Run: crio config
	I1014 15:02:18.943058   72639 cni.go:84] Creating CNI manager for ""
	I1014 15:02:18.943082   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:18.943091   72639 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:18.943108   72639 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 15:02:18.943225   72639 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:18.943285   72639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 15:02:18.956635   72639 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:18.956727   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:18.970846   72639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 15:02:18.992163   72639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:19.012061   72639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 15:02:19.033158   72639 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:19.037195   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:19.051127   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:19.172992   72639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:19.190545   72639 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 15:02:19.190572   72639 certs.go:194] generating shared ca certs ...
	I1014 15:02:19.190592   72639 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.190786   72639 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:19.190843   72639 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:19.190853   72639 certs.go:256] generating profile certs ...
	I1014 15:02:19.190973   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 15:02:19.191053   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 15:02:19.191108   72639 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 15:02:19.191264   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:19.191302   72639 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:19.191314   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:19.191345   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:19.191374   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:19.191423   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:19.191477   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:19.192328   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:19.248981   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:19.281262   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:19.312859   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:19.351940   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 15:02:19.405710   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:19.441313   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:19.481774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 15:02:19.509433   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:19.537994   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:19.564460   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:19.593632   72639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:19.614775   72639 ssh_runner.go:195] Run: openssl version
	I1014 15:02:19.623548   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:19.636680   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642225   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642286   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.648609   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:19.661130   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:19.672988   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678119   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678189   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.684583   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:19.696685   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:19.708338   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713443   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713502   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.719482   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
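	The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's CA lookup convention: tools that trust /etc/ssl/certs locate a CA by the hash of its subject name, so each installed PEM gets a <subject-hash>.0 link pointing at it. The hash in the link name is exactly what the preceding "openssl x509 -hash -noout -in <cert>" calls print, e.g. (illustrative re-run):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, hence the b5213941.0 link above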
	I1014 15:02:19.731720   72639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:19.739006   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:19.747558   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:19.756399   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:19.764987   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:19.773320   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:19.781239   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
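	Each "openssl x509 ... -checkend 86400" call above is a freshness probe: it exits 0 if the certificate will still be valid 86400 seconds (24 hours) from now and non-zero if it would expire within that window, which is how the restart path verifies that the existing control-plane certificates are not about to expire. The same check can be scripted directly (illustrative sketch, not part of the logged run):
	  if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	    echo "cert valid for at least another 24h"
	  else
	    echo "cert expires within 24h; needs regeneration"
	  fi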
	I1014 15:02:19.788638   72639 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:19.788753   72639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:19.788810   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.829586   72639 cri.go:89] found id: ""
	I1014 15:02:19.829641   72639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:19.844632   72639 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:19.844654   72639 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:19.844708   72639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:19.860547   72639 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:19.861848   72639 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:19.862755   72639 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-399767" cluster setting kubeconfig missing "old-k8s-version-399767" context setting]
	I1014 15:02:19.863757   72639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.927447   72639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:19.940830   72639 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.138
	I1014 15:02:19.940919   72639 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:19.940947   72639 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:19.941009   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.983689   72639 cri.go:89] found id: ""
	I1014 15:02:19.983769   72639 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:20.007079   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:20.023868   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:20.023896   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:20.023971   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:20.038661   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:20.038734   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:20.054357   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:20.068771   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:20.068843   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:20.081157   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.095416   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:20.095483   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.109099   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:20.120608   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:20.120680   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:20.133217   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:20.145896   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:20.311840   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.472918   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.161037865s)
	I1014 15:02:21.472953   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.739827   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.833423   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.931874   72639 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:21.931987   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.432595   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.932784   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:21.138446   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.636836   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.833532   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:20.833974   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:20.834000   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:20.833930   73776 retry.go:31] will retry after 1.936414638s: waiting for machine to come up
	I1014 15:02:22.771789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:22.772183   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:22.772206   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:22.772148   73776 retry.go:31] will retry after 2.51581517s: waiting for machine to come up
	I1014 15:02:25.290082   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:25.290491   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:25.290518   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:25.290453   73776 retry.go:31] will retry after 3.279920525s: waiting for machine to come up
	I1014 15:02:21.420355   72390 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.420385   72390 pod_ready.go:82] duration metric: took 6.465669ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.420398   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427723   72390 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.427747   72390 pod_ready.go:82] duration metric: took 7.340946ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427760   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433500   72390 pod_ready.go:93] pod "kube-proxy-rh82t" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.433526   72390 pod_ready.go:82] duration metric: took 5.757064ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433543   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802632   72390 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.802660   72390 pod_ready.go:82] duration metric: took 369.107697ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802672   72390 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:23.811046   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:26.308105   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.432728   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.932296   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.432079   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.932064   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.432201   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.932119   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.432423   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.932675   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.432633   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.932380   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.637287   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.137136   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.572901   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:28.573383   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:28.573421   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:28.573304   73776 retry.go:31] will retry after 5.283390724s: waiting for machine to come up
	I1014 15:02:28.310800   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:30.400310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.432518   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.932871   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.432350   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.932761   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.432621   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.932873   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.432716   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.932364   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.432747   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.933039   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.637300   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.136858   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.858151   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858626   71679 main.go:141] libmachine: (no-preload-813300) Found IP for machine: 192.168.61.13
	I1014 15:02:33.858660   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has current primary IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858670   71679 main.go:141] libmachine: (no-preload-813300) Reserving static IP address...
	I1014 15:02:33.859001   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.859022   71679 main.go:141] libmachine: (no-preload-813300) Reserved static IP address: 192.168.61.13
	I1014 15:02:33.859040   71679 main.go:141] libmachine: (no-preload-813300) DBG | skip adding static IP to network mk-no-preload-813300 - found existing host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"}
	I1014 15:02:33.859055   71679 main.go:141] libmachine: (no-preload-813300) DBG | Getting to WaitForSSH function...
	I1014 15:02:33.859065   71679 main.go:141] libmachine: (no-preload-813300) Waiting for SSH to be available...
	I1014 15:02:33.860949   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861245   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.861287   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861398   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH client type: external
	I1014 15:02:33.861424   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa (-rw-------)
	I1014 15:02:33.861460   71679 main.go:141] libmachine: (no-preload-813300) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:33.861476   71679 main.go:141] libmachine: (no-preload-813300) DBG | About to run SSH command:
	I1014 15:02:33.861488   71679 main.go:141] libmachine: (no-preload-813300) DBG | exit 0
	I1014 15:02:33.991450   71679 main.go:141] libmachine: (no-preload-813300) DBG | SSH cmd err, output: <nil>: 
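For readability, the "Using SSH client type: external" lines above show libmachine shelling out to the system ssh binary to run "exit 0" as its reachability probe. Reassembled from the argument vector logged at 15:02:33.861460, the equivalent manual invocation is roughly the following (flags, user, IP and key path are taken verbatim from the log; only the line breaks and the quoting of the remote command are added here):

    ssh -F /dev/null \
      -o ConnectionAttempts=3 -o ConnectTimeout=10 \
      -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
      -o PasswordAuthentication=no -o ServerAliveInterval=60 \
      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      docker@192.168.61.13 -o IdentitiesOnly=yes \
      -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa \
      -p 22 "exit 0"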
	I1014 15:02:33.991854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetConfigRaw
	I1014 15:02:33.992623   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:33.995514   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.995884   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.995908   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.996225   71679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/config.json ...
	I1014 15:02:33.996549   71679 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:33.996572   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:33.996784   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:33.999385   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999751   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.999789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999948   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.000135   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000312   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000455   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.000648   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.000874   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.000890   71679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:34.114981   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:34.115014   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115245   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:02:34.115272   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115421   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.117557   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.117890   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.117929   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.118027   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.118210   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118365   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118524   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.118720   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.118913   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.118932   71679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-813300 && echo "no-preload-813300" | sudo tee /etc/hostname
	I1014 15:02:34.246092   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-813300
	
	I1014 15:02:34.246149   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.248672   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249095   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.249122   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249331   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.249505   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249860   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.250061   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.250272   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.250297   71679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:34.373470   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:34.373512   71679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:34.373576   71679 buildroot.go:174] setting up certificates
	I1014 15:02:34.373594   71679 provision.go:84] configureAuth start
	I1014 15:02:34.373613   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.373903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:34.376697   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.376986   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.377009   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.377137   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.379469   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379813   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.379838   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379981   71679 provision.go:143] copyHostCerts
	I1014 15:02:34.380034   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:34.380050   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:34.380106   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:34.380194   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:34.380201   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:34.380223   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:34.380282   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:34.380288   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:34.380305   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:34.380362   71679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.no-preload-813300 san=[127.0.0.1 192.168.61.13 localhost minikube no-preload-813300]
	I1014 15:02:34.421281   71679 provision.go:177] copyRemoteCerts
	I1014 15:02:34.421331   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:34.421353   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.423903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424219   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.424248   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424471   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.424665   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.424807   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.424948   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.512847   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:34.539814   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:02:34.568946   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:34.593444   71679 provision.go:87] duration metric: took 219.83393ms to configureAuth
	I1014 15:02:34.593467   71679 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:34.593661   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:34.593744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.596317   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596626   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.596659   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596819   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.597008   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597159   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597295   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.597433   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.597611   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.597631   71679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:34.837224   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:34.837244   71679 machine.go:96] duration metric: took 840.680679ms to provisionDockerMachine
	I1014 15:02:34.837256   71679 start.go:293] postStartSetup for "no-preload-813300" (driver="kvm2")
	I1014 15:02:34.837265   71679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:34.837281   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:34.837593   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:34.837625   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.840357   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840677   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.840702   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840845   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.841025   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.841193   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.841363   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.930754   71679 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:34.935428   71679 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:34.935457   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:34.935541   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:34.935659   71679 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:34.935795   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:34.946363   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:34.973029   71679 start.go:296] duration metric: took 135.76066ms for postStartSetup
	I1014 15:02:34.973074   71679 fix.go:56] duration metric: took 23.72449375s for fixHost
	I1014 15:02:34.973098   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.975897   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976211   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.976237   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976487   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.976687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976813   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976923   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.977075   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.977294   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.977309   71679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:35.091556   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918155.078304162
	
	I1014 15:02:35.091581   71679 fix.go:216] guest clock: 1728918155.078304162
	I1014 15:02:35.091590   71679 fix.go:229] Guest: 2024-10-14 15:02:35.078304162 +0000 UTC Remote: 2024-10-14 15:02:34.973079478 +0000 UTC m=+359.485826316 (delta=105.224684ms)
	I1014 15:02:35.091610   71679 fix.go:200] guest clock delta is within tolerance: 105.224684ms
	I1014 15:02:35.091616   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 23.843071366s
	I1014 15:02:35.091641   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.091899   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:35.094383   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094712   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.094733   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094910   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095353   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095534   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095589   71679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:35.095658   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.095750   71679 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:35.095773   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.098288   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098316   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098680   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098713   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098743   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098795   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098835   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099003   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099186   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099198   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099367   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099371   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.099513   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099728   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.179961   71679 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:35.205523   71679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:35.350662   71679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:35.356870   71679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:35.356941   71679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:35.374967   71679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:35.374997   71679 start.go:495] detecting cgroup driver to use...
	I1014 15:02:35.375067   71679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:35.393194   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:35.408295   71679 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:35.408362   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:35.423927   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:35.438753   71679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:32.809221   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:34.811962   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:35.567539   71679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:35.702830   71679 docker.go:233] disabling docker service ...
	I1014 15:02:35.702916   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:35.720822   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:35.735403   71679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:35.880532   71679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:36.003343   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:36.018230   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:36.037065   71679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:02:36.037134   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.047820   71679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:36.047880   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.058531   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.069760   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.081047   71679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:36.092384   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.103241   71679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.121771   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
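The sed/grep commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and allow unprivileged low ports. Pieced together from those commands alone (a sketch, not a file captured from the VM; the TOML section headers are CRI-O's usual drop-in layout and do not appear in the log), the resulting fragment should look roughly like:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]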
	I1014 15:02:36.132886   71679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:36.143239   71679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:36.143308   71679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:36.156582   71679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:36.165955   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:36.283857   71679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:36.388165   71679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:36.388243   71679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:36.393324   71679 start.go:563] Will wait 60s for crictl version
	I1014 15:02:36.393378   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.397236   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:36.444749   71679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:36.444839   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.474831   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.520531   71679 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:02:33.432474   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.932719   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.432581   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.932863   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.432886   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.932915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.432852   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.932367   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.432894   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.933035   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.637235   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.137613   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:36.521865   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:36.524566   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.524956   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:36.524984   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.525213   71679 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:36.529579   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:36.542554   71679 kubeadm.go:883] updating cluster {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:36.542701   71679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:02:36.542737   71679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:36.585681   71679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:02:36.585719   71679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:36.585806   71679 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.585838   71679 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.585865   71679 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.585886   71679 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1014 15:02:36.585925   71679 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.585814   71679 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.585954   71679 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.585843   71679 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587263   71679 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.587290   71679 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.587326   71679 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587274   71679 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1014 15:02:36.737070   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.750146   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.750401   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.767605   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1014 15:02:36.775005   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.797223   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.833657   71679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1014 15:02:36.833708   71679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.833754   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.833875   71679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1014 15:02:36.833896   71679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.833929   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.850009   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.911675   71679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1014 15:02:36.911720   71679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.911779   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973319   71679 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1014 15:02:36.973354   71679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.973383   71679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1014 15:02:36.973394   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973414   71679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.973453   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.973456   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973519   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.973619   71679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1014 15:02:36.973640   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.973644   71679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.973671   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.044689   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.044739   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.044815   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.044860   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.044907   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.044947   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166670   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.166737   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166794   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.166908   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.166924   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.272802   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.272835   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.287078   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1014 15:02:37.287167   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.287207   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.287240   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1014 15:02:37.287293   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1014 15:02:37.287320   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:37.287367   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:37.354510   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.354621   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1014 15:02:37.354659   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1014 15:02:37.354676   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354700   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1014 15:02:37.354711   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:37.354719   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354790   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1014 15:02:37.354812   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1014 15:02:37.354865   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:37.532403   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.443614   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1: (2.089069189s)
	I1014 15:02:39.443676   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1014 15:02:39.443766   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.089027703s)
	I1014 15:02:39.443790   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1014 15:02:39.443775   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:39.443813   71679 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443833   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.089105476s)
	I1014 15:02:39.443854   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443861   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1014 15:02:39.443911   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.089031069s)
	I1014 15:02:39.443933   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1014 15:02:39.443986   71679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.911557292s)
	I1014 15:02:39.444029   71679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1014 15:02:39.444057   71679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.444111   71679 ssh_runner.go:195] Run: which crictl
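The burst of inspect/rmi/stat/load lines above is minikube's cache_images path: because no preload tarball matched, each required image is checked in the container runtime, any mismatched copy is removed, and the cached archive is copied over SSH (or skipped when it is already on the VM, as here) and loaded with podman. Per image, the sequence boils down to roughly the following, shown for kube-scheduler (a sketch assembled from the commands visible in this log, not a script minikube actually writes out):

    sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1   # present with the expected hash?
    sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1                     # drop the mismatched copy
    stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1                     # skip the SSH copy if the archive already exists
    sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1                 # load the archive into CRI-O's image store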
	I1014 15:02:37.309522   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:39.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.432551   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.932486   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.432591   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.932694   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.432065   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.932044   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.432313   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.933055   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.432453   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.932258   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
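The repeating pgrep lines from process 72639 are a readiness poll: minikube keeps checking for a kube-apiserver process whose command line mentions minikube, using the command shown above. Judging from the timestamps it fires about every 500ms; reproduced by hand the loop is essentially (a sketch, the interval is read off the log rather than taken from any documented option):

    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do sleep 0.5; done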
	I1014 15:02:40.137656   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:42.637462   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:41.514958   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.071133048s)
	I1014 15:02:41.514987   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.071109487s)
	I1014 15:02:41.515016   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1014 15:02:41.515041   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515046   71679 ssh_runner.go:235] Completed: which crictl: (2.070916553s)
	I1014 15:02:41.514994   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1014 15:02:41.515093   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515105   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:41.569878   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401013   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.885889648s)
	I1014 15:02:43.401053   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1014 15:02:43.401068   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.831164682s)
	I1014 15:02:43.401082   71679 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:43.401131   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401139   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:41.809862   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.810054   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:45.810567   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.432054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.932139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.432261   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.932517   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.432959   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.933103   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.432845   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.932825   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.432059   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.932745   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.639020   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:47.136927   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:49.137423   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:46.799144   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.397987929s)
	I1014 15:02:46.799198   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 15:02:46.799201   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398044957s)
	I1014 15:02:46.799222   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1014 15:02:46.799249   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.799295   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:46.799296   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.804398   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1014 15:02:48.971377   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.171989764s)
	I1014 15:02:48.971409   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1014 15:02:48.971436   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.971481   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.309980   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.311361   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:48.432869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.432754   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.432199   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.932861   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.432404   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.932097   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.432569   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.933078   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.141481   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.638306   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.935341   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.963834471s)
	I1014 15:02:50.935373   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1014 15:02:50.935401   71679 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:50.935452   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:51.683211   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 15:02:51.683268   71679 cache_images.go:123] Successfully loaded all cached images
	I1014 15:02:51.683277   71679 cache_images.go:92] duration metric: took 15.097525447s to LoadCachedImages
	I1014 15:02:51.683293   71679 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.31.1 crio true true} ...
	I1014 15:02:51.683441   71679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:51.683525   71679 ssh_runner.go:195] Run: crio config
	I1014 15:02:51.737769   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:02:51.737790   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:51.737799   71679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:51.737818   71679 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-813300 NodeName:no-preload-813300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:02:51.737955   71679 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-813300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
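
The block above is the multi-document kubeadm configuration that minikube renders (and, a few lines further down, writes to /var/tmp/minikube/kubeadm.yaml.new): an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration separated by `---`. As a rough, hypothetical sketch (not minikube code; only the document structure is taken from the log, and the local file path is illustrative), such a stream can be split and inspected with gopkg.in/yaml.v3:

// Hypothetical helper, not part of minikube: walk the documents in a
// multi-document kubeadm YAML stream and print each one's apiVersion and kind.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Assumed local copy of the generated config; the path is illustrative.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// For the config above this prints the four documents:
		// InitConfiguration, ClusterConfiguration, KubeletConfiguration,
		// KubeProxyConfiguration, each with its apiVersion.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}
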
	I1014 15:02:51.738019   71679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:02:51.749175   71679 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:51.749241   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:51.759120   71679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1014 15:02:51.777293   71679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:51.795073   71679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1014 15:02:51.815094   71679 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:51.819087   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:51.831806   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:51.953191   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:51.972342   71679 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300 for IP: 192.168.61.13
	I1014 15:02:51.972362   71679 certs.go:194] generating shared ca certs ...
	I1014 15:02:51.972379   71679 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:51.972534   71679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:51.972583   71679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:51.972597   71679 certs.go:256] generating profile certs ...
	I1014 15:02:51.972732   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/client.key
	I1014 15:02:51.972822   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key.4d535e2d
	I1014 15:02:51.972885   71679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key
	I1014 15:02:51.973064   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:51.973102   71679 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:51.973111   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:51.973151   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:51.973180   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:51.973203   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:51.973260   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:51.974077   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:52.019451   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:52.048323   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:52.086241   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:52.129342   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:02:52.157243   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:52.189093   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:52.214980   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:02:52.241595   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:52.270329   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:52.295153   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:52.321303   71679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:52.339181   71679 ssh_runner.go:195] Run: openssl version
	I1014 15:02:52.345152   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:52.357167   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362387   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362442   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.369003   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:52.380917   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:52.392884   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397876   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397942   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.404038   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:52.415841   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:52.426973   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431848   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431914   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.439851   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:52.455014   71679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:52.460088   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:52.466495   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:52.472659   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:52.483107   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:52.491272   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:52.497692   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
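
The six `openssl x509 -noout -in <cert> -checkend 86400` runs just above ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now. A minimal Go sketch of the same check, assuming only the standard library (the certificate path is one of the ones in the log and exists only inside the test VM):

// Sketch, not minikube code: the equivalent of `openssl x509 -checkend 86400` --
// report whether a certificate remains valid for at least another 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Still valid iff the expiry (NotAfter) lies beyond now + d.
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("still valid in 24h:", ok)
}
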
	I1014 15:02:52.504352   71679 kubeadm.go:392] StartCluster: {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:52.504456   71679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:52.504502   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.544010   71679 cri.go:89] found id: ""
	I1014 15:02:52.544074   71679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:52.554296   71679 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:52.554314   71679 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:52.554364   71679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:52.564193   71679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:52.565367   71679 kubeconfig.go:125] found "no-preload-813300" server: "https://192.168.61.13:8443"
	I1014 15:02:52.567519   71679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:52.577268   71679 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.13
	I1014 15:02:52.577296   71679 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:52.577305   71679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:52.577343   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.614462   71679 cri.go:89] found id: ""
	I1014 15:02:52.614551   71679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:52.631835   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:52.642314   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:52.642334   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:52.642378   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:52.652036   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:52.652114   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:52.662263   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:52.672145   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:52.672214   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:52.682085   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.691628   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:52.691706   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.701314   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:52.711232   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:52.711291   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:52.722480   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:52.733359   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:52.849407   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.647528   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.863718   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.938091   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:54.046445   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:54.046544   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.546715   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.047285   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.062239   71679 api_server.go:72] duration metric: took 1.015804644s to wait for apiserver process to appear ...
	I1014 15:02:55.062265   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:55.062296   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:55.062806   71679 api_server.go:269] stopped: https://192.168.61.13:8443/healthz: Get "https://192.168.61.13:8443/healthz": dial tcp 192.168.61.13:8443: connect: connection refused
	I1014 15:02:52.811186   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.309901   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.432335   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.932860   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.433105   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.933031   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.432058   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.932422   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.432618   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.932727   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.432265   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.932733   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.136357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.136956   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.562748   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.274557   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.274587   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.274625   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.296655   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.296682   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.563094   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.567676   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:58.567717   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.063266   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.067656   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.067697   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.563300   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.569667   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.569699   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:03:00.063305   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:03:00.067834   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:03:00.079522   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:03:00.079555   71679 api_server.go:131] duration metric: took 5.017283463s to wait for apiserver health ...
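
The wait that just finished is the usual post-restart pattern: the very first request is refused while the apiserver is still coming up, the 403 responses are anonymous requests rejected before the RBAC bootstrap roles (which grant unauthenticated access to /healthz) have been created, the 500 responses list post-start hooks that have not completed, and the wait ends once /healthz returns 200 "ok". A minimal sketch of such a poll loop, with the URL and rough timing taken from the log (this is not minikube's api_server.go, and TLS verification is skipped only to keep the example short):

// Sketch of a /healthz poll loop similar to the wait above: keep requesting the
// endpoint until it returns 200, treating refused connections, 403s and 500s as
// "not ready yet".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Skipped only for brevity; the apiserver cert is validated elsewhere.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the apiserver answered 200 "ok"
			}
			fmt.Printf("healthz not ready (%d): %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver /healthz not healthy after %s", timeout)
}

func main() {
	// URL from the log above; 4m0s mirrors the other waits in this run.
	if err := waitForHealthz("https://192.168.61.13:8443/healthz", 4*time.Minute); err != nil {
		panic(err)
	}
}
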
	I1014 15:03:00.079565   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:03:00.079572   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:03:00.081793   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:03:00.083132   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:03:00.095329   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:03:00.114972   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:03:00.148816   71679 system_pods.go:59] 8 kube-system pods found
	I1014 15:03:00.148849   71679 system_pods.go:61] "coredns-7c65d6cfc9-5cft7" [43bb92da-74e8-4430-a889-3c23ed3fef67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:03:00.148859   71679 system_pods.go:61] "etcd-no-preload-813300" [c3e9137c-855e-49e2-8891-8df57707f75a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:03:00.148867   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [683c2d48-6c84-470c-96e5-0706a1884ee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:03:00.148872   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [405991ef-9b48-4770-ba31-a213f0eae077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:03:00.148882   71679 system_pods.go:61] "kube-proxy-jd4t4" [6c5c517b-855e-440c-976e-9c5e5d0710f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:03:00.148887   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [e76569e6-74c8-44dd-b283-a82072226686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:03:00.148892   71679 system_pods.go:61] "metrics-server-6867b74b74-br4tl" [5b3425c6-9847-447d-a9ab-076c7cc1634f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:03:00.148896   71679 system_pods.go:61] "storage-provisioner" [2c52e790-afa9-4131-8e28-801eb3f822d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 15:03:00.148906   71679 system_pods.go:74] duration metric: took 33.908487ms to wait for pod list to return data ...
	I1014 15:03:00.148918   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:03:00.161000   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:03:00.161029   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:03:00.161042   71679 node_conditions.go:105] duration metric: took 12.118841ms to run NodePressure ...
	I1014 15:03:00.161067   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:03:00.510702   71679 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515692   71679 kubeadm.go:739] kubelet initialised
	I1014 15:03:00.515715   71679 kubeadm.go:740] duration metric: took 4.986873ms waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515724   71679 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:03:00.521483   71679 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:57.810518   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:59.811287   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.432774   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.932666   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.433020   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.932671   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.432717   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.932917   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.432735   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.932668   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.432260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.932075   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.137257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.137876   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.528402   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.530210   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:04.530241   71679 pod_ready.go:82] duration metric: took 4.008725187s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:04.530254   71679 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:02.309134   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.311421   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:03.432139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.932241   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.432421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.932869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.432972   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.933010   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.432409   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.932778   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.432067   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.932749   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.636760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:07.136410   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.137483   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.537318   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.037462   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.810244   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.810932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.813334   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.432529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.932034   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.933054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.432938   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.932661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.432392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.932068   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.432066   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.932122   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.636654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.637819   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.536905   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:10.536932   71679 pod_ready.go:82] duration metric: took 6.006669219s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:10.536945   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:12.551283   71679 pod_ready.go:103] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.044142   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.044166   71679 pod_ready.go:82] duration metric: took 2.507213726s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.044176   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049176   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.049196   71679 pod_ready.go:82] duration metric: took 5.01377ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049206   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053623   71679 pod_ready.go:93] pod "kube-proxy-jd4t4" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.053646   71679 pod_ready.go:82] duration metric: took 4.434586ms for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053654   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559610   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.559632   71679 pod_ready.go:82] duration metric: took 505.972722ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559642   71679 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
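
Each pod_ready wait in this log blocks until the pod's PodReady condition reports True; the metrics-server pods never reach that state in this run, which is why the "Ready":"False" lines keep repeating. A minimal client-go sketch of that check, assuming a reachable kubeconfig at the default location and reusing the pod name from the log (this is not minikube's pod_ready.go):

// Sketch: poll a pod until its PodReady condition is True, the same condition
// the pod_ready waits above are checking.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 4m0s mirrors the wait budget shown in the log.
	for start := time.Now(); time.Since(start) < 4*time.Minute; time.Sleep(2 * time.Second) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-6867b74b74-br4tl", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
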
	I1014 15:03:13.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.309622   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.432556   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.932427   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.432053   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.932460   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.432714   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.933071   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.432567   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.932414   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.432985   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.932960   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.136599   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.137964   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.566234   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.567065   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:20.066221   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.309837   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:19.310194   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.433026   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.932015   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.932030   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.433050   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.932658   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.432667   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.933045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:21.933127   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:21.973476   72639 cri.go:89] found id: ""
	I1014 15:03:21.973507   72639 logs.go:282] 0 containers: []
	W1014 15:03:21.973517   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:21.973523   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:21.973584   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:22.011700   72639 cri.go:89] found id: ""
	I1014 15:03:22.011732   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.011742   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:22.011748   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:22.011814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:22.047721   72639 cri.go:89] found id: ""
	I1014 15:03:22.047744   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.047752   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:22.047762   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:22.047814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:22.091618   72639 cri.go:89] found id: ""
	I1014 15:03:22.091644   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.091652   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:22.091657   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:22.091706   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:22.129997   72639 cri.go:89] found id: ""
	I1014 15:03:22.130036   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.130047   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:22.130055   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:22.130114   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:22.168024   72639 cri.go:89] found id: ""
	I1014 15:03:22.168053   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.168061   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:22.168067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:22.168136   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:22.202633   72639 cri.go:89] found id: ""
	I1014 15:03:22.202660   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.202670   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:22.202677   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:22.202739   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:22.238224   72639 cri.go:89] found id: ""
	I1014 15:03:22.238251   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.238259   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:22.238267   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:22.238278   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:22.251940   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:22.251991   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:22.379777   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:22.379799   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:22.379814   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:22.456468   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:22.456507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:22.495404   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:22.495433   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:20.636995   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.637141   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.066371   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.566023   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:21.809579   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.309010   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:25.048061   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:25.068586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:25.068658   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:25.121199   72639 cri.go:89] found id: ""
	I1014 15:03:25.121228   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.121237   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:25.121243   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:25.121303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:25.174705   72639 cri.go:89] found id: ""
	I1014 15:03:25.174738   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.174749   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:25.174757   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:25.174815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:25.236972   72639 cri.go:89] found id: ""
	I1014 15:03:25.237002   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.237013   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:25.237020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:25.237077   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:25.276443   72639 cri.go:89] found id: ""
	I1014 15:03:25.276473   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.276483   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:25.276489   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:25.276541   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:25.314573   72639 cri.go:89] found id: ""
	I1014 15:03:25.314623   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.314636   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:25.314645   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:25.314708   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:25.357489   72639 cri.go:89] found id: ""
	I1014 15:03:25.357515   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.357525   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:25.357533   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:25.357595   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:25.397504   72639 cri.go:89] found id: ""
	I1014 15:03:25.397527   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.397538   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:25.397546   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:25.397597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:25.433139   72639 cri.go:89] found id: ""
	I1014 15:03:25.433162   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.433170   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:25.433179   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:25.433193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:25.448088   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:25.448121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:25.522377   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:25.522401   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:25.522415   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:25.595505   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:25.595538   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:25.643478   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:25.643511   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:25.137557   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.637096   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.067425   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.565568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:26.809419   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.309193   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.310234   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:28.195236   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:28.208612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:28.208686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:28.248538   72639 cri.go:89] found id: ""
	I1014 15:03:28.248569   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.248581   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:28.248588   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:28.248652   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:28.286103   72639 cri.go:89] found id: ""
	I1014 15:03:28.286131   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.286143   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:28.286149   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:28.286209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:28.321335   72639 cri.go:89] found id: ""
	I1014 15:03:28.321371   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.321383   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:28.321391   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:28.321453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:28.358538   72639 cri.go:89] found id: ""
	I1014 15:03:28.358571   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.358581   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:28.358588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:28.358661   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:28.397058   72639 cri.go:89] found id: ""
	I1014 15:03:28.397087   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.397099   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:28.397106   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:28.397175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:28.434010   72639 cri.go:89] found id: ""
	I1014 15:03:28.434032   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.434040   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:28.434045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:28.434095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:28.474646   72639 cri.go:89] found id: ""
	I1014 15:03:28.474672   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.474681   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:28.474687   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:28.474736   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:28.512833   72639 cri.go:89] found id: ""
	I1014 15:03:28.512860   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.512871   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:28.512882   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:28.512894   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:28.526233   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:28.526262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:28.601366   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:28.601393   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:28.601416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:28.690261   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:28.690300   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:28.734134   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:28.734158   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.290184   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:31.303493   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:31.303558   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:31.341521   72639 cri.go:89] found id: ""
	I1014 15:03:31.341552   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.341563   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:31.341569   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:31.341627   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:31.378811   72639 cri.go:89] found id: ""
	I1014 15:03:31.378839   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.378851   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:31.378859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:31.378922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:31.416282   72639 cri.go:89] found id: ""
	I1014 15:03:31.416310   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.416321   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:31.416328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:31.416392   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:31.456089   72639 cri.go:89] found id: ""
	I1014 15:03:31.456123   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.456134   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:31.456142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:31.456202   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:31.496429   72639 cri.go:89] found id: ""
	I1014 15:03:31.496468   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.496478   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:31.496485   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:31.496548   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:31.535226   72639 cri.go:89] found id: ""
	I1014 15:03:31.535248   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.535256   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:31.535262   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:31.535321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:31.572580   72639 cri.go:89] found id: ""
	I1014 15:03:31.572608   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.572623   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:31.572631   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:31.572691   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:31.606736   72639 cri.go:89] found id: ""
	I1014 15:03:31.606759   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.606766   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:31.606774   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:31.606785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:31.646048   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:31.646078   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.696818   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:31.696851   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:31.710099   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:31.710128   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:31.787756   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:31.787783   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:31.787798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:30.136436   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:32.138037   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.139660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.566034   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.567029   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.809434   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.309487   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.369392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:34.383263   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:34.383344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:34.417763   72639 cri.go:89] found id: ""
	I1014 15:03:34.417797   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.417809   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:34.417816   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:34.417890   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:34.453361   72639 cri.go:89] found id: ""
	I1014 15:03:34.453391   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.453402   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:34.453409   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:34.453488   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:34.490878   72639 cri.go:89] found id: ""
	I1014 15:03:34.490905   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.490913   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:34.490919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:34.490980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:34.527554   72639 cri.go:89] found id: ""
	I1014 15:03:34.527584   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.527595   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:34.527603   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:34.527655   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:34.564813   72639 cri.go:89] found id: ""
	I1014 15:03:34.564841   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.564851   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:34.564857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:34.564903   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:34.599899   72639 cri.go:89] found id: ""
	I1014 15:03:34.599930   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.599942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:34.599949   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:34.600019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:34.641686   72639 cri.go:89] found id: ""
	I1014 15:03:34.641717   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.641728   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:34.641735   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:34.641794   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:34.681154   72639 cri.go:89] found id: ""
	I1014 15:03:34.681184   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.681195   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:34.681205   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:34.681218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:34.719638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:34.719672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:34.771687   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:34.771722   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:34.785943   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:34.785972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:34.861821   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:34.861861   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:34.861875   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.441605   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:37.456763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:37.456828   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:37.494176   72639 cri.go:89] found id: ""
	I1014 15:03:37.494202   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.494210   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:37.494216   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:37.494268   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:37.538802   72639 cri.go:89] found id: ""
	I1014 15:03:37.538834   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.538846   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:37.538853   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:37.538913   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:37.586282   72639 cri.go:89] found id: ""
	I1014 15:03:37.586312   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.586322   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:37.586328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:37.586397   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:37.632673   72639 cri.go:89] found id: ""
	I1014 15:03:37.632698   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.632709   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:37.632715   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:37.632771   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:37.673340   72639 cri.go:89] found id: ""
	I1014 15:03:37.673364   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.673372   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:37.673377   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:37.673427   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:37.718725   72639 cri.go:89] found id: ""
	I1014 15:03:37.718750   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.718758   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:37.718764   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:37.718807   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:37.760560   72639 cri.go:89] found id: ""
	I1014 15:03:37.760587   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.760597   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:37.760605   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:37.760665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:37.800912   72639 cri.go:89] found id: ""
	I1014 15:03:37.800941   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.800949   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:37.800957   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:37.800968   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:37.815338   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:37.815363   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:37.893018   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:37.893050   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:37.893067   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.978315   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:37.978349   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:36.637635   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:39.136295   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.065915   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.066310   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.810020   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.810460   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.019760   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:38.019788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.570918   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:40.586058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:40.586122   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:40.623753   72639 cri.go:89] found id: ""
	I1014 15:03:40.623784   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.623795   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:40.623801   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:40.623862   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:40.663909   72639 cri.go:89] found id: ""
	I1014 15:03:40.663937   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.663946   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:40.663953   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:40.664008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:40.698572   72639 cri.go:89] found id: ""
	I1014 15:03:40.698615   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.698626   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:40.698633   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:40.698683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:40.734882   72639 cri.go:89] found id: ""
	I1014 15:03:40.734907   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.734914   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:40.734920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:40.734976   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:40.768429   72639 cri.go:89] found id: ""
	I1014 15:03:40.768455   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.768462   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:40.768468   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:40.768527   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:40.803429   72639 cri.go:89] found id: ""
	I1014 15:03:40.803456   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.803466   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:40.803474   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:40.803535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:40.842854   72639 cri.go:89] found id: ""
	I1014 15:03:40.842883   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.842905   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:40.842913   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:40.842988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:40.879638   72639 cri.go:89] found id: ""
	I1014 15:03:40.879661   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.879669   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:40.879677   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:40.879687   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:40.924949   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:40.924983   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.976271   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:40.976304   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:40.991492   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:40.991520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:41.071418   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:41.071439   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:41.071453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:41.136877   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.637356   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.566353   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.065982   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.066405   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.310188   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.811549   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.652387   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:43.666239   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:43.666317   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:43.705726   72639 cri.go:89] found id: ""
	I1014 15:03:43.705752   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.705761   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:43.705766   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:43.705814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:43.745648   72639 cri.go:89] found id: ""
	I1014 15:03:43.745672   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.745680   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:43.745685   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:43.745731   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:43.783032   72639 cri.go:89] found id: ""
	I1014 15:03:43.783055   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.783063   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:43.783068   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:43.783115   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:43.820582   72639 cri.go:89] found id: ""
	I1014 15:03:43.820607   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.820617   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:43.820623   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:43.820669   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:43.862312   72639 cri.go:89] found id: ""
	I1014 15:03:43.862338   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.862348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:43.862353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:43.862404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:43.898338   72639 cri.go:89] found id: ""
	I1014 15:03:43.898368   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.898379   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:43.898388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:43.898448   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:43.934682   72639 cri.go:89] found id: ""
	I1014 15:03:43.934709   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.934719   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:43.934726   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:43.934781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:43.970209   72639 cri.go:89] found id: ""
	I1014 15:03:43.970237   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.970247   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:43.970257   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:43.970269   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:44.024791   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:44.024832   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:44.038431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:44.038457   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:44.117255   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:44.117291   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:44.117308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:44.199397   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:44.199436   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:46.739819   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:46.755553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:46.755625   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:46.797225   72639 cri.go:89] found id: ""
	I1014 15:03:46.797253   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.797265   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:46.797272   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:46.797335   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:46.832999   72639 cri.go:89] found id: ""
	I1014 15:03:46.833025   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.833036   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:46.833043   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:46.833103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:46.872711   72639 cri.go:89] found id: ""
	I1014 15:03:46.872733   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.872741   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:46.872746   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:46.872795   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:46.909945   72639 cri.go:89] found id: ""
	I1014 15:03:46.909968   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.909977   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:46.909985   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:46.910046   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:46.946036   72639 cri.go:89] found id: ""
	I1014 15:03:46.946067   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.946080   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:46.946087   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:46.946141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:46.981772   72639 cri.go:89] found id: ""
	I1014 15:03:46.981806   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.981819   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:46.981828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:46.981896   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:47.022761   72639 cri.go:89] found id: ""
	I1014 15:03:47.022790   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.022800   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:47.022807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:47.022869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:47.057368   72639 cri.go:89] found id: ""
	I1014 15:03:47.057392   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.057400   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:47.057408   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:47.057418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:47.134369   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:47.134408   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:47.179550   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:47.179586   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:47.233317   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:47.233355   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:47.247598   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:47.247629   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:47.321309   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:45.637760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.136826   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:47.067003   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.565410   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:50.812241   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.821955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:49.836907   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:49.836975   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:49.876651   72639 cri.go:89] found id: ""
	I1014 15:03:49.876682   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.876694   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:49.876713   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:49.876781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:49.913440   72639 cri.go:89] found id: ""
	I1014 15:03:49.913464   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.913473   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:49.913479   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:49.913535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:49.949352   72639 cri.go:89] found id: ""
	I1014 15:03:49.949383   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.949395   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:49.949402   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:49.949463   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:49.984599   72639 cri.go:89] found id: ""
	I1014 15:03:49.984629   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.984641   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:49.984649   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:49.984709   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:50.028049   72639 cri.go:89] found id: ""
	I1014 15:03:50.028072   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.028083   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:50.028090   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:50.028166   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:50.062272   72639 cri.go:89] found id: ""
	I1014 15:03:50.062294   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.062302   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:50.062308   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:50.062358   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:50.099722   72639 cri.go:89] found id: ""
	I1014 15:03:50.099750   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.099762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:50.099769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:50.099830   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:50.139984   72639 cri.go:89] found id: ""
	I1014 15:03:50.140005   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.140013   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:50.140020   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:50.140032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:50.218467   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:50.218500   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:50.260600   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:50.260635   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:50.313725   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:50.313757   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:50.328431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:50.328462   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:50.401334   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:52.901787   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:52.917836   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:52.917902   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:52.955387   72639 cri.go:89] found id: ""
	I1014 15:03:52.955418   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.955431   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:52.955440   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:52.955504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:52.990890   72639 cri.go:89] found id: ""
	I1014 15:03:52.990924   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.990936   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:52.990945   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:52.991004   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:50.636581   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.137639   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:51.566403   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:54.066690   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.310174   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:55.809402   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.032344   72639 cri.go:89] found id: ""
	I1014 15:03:53.032374   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.032384   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:53.032390   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:53.032458   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:53.073501   72639 cri.go:89] found id: ""
	I1014 15:03:53.073527   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.073537   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:53.073544   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:53.073602   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:53.114273   72639 cri.go:89] found id: ""
	I1014 15:03:53.114307   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.114316   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:53.114334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:53.114389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:53.155448   72639 cri.go:89] found id: ""
	I1014 15:03:53.155475   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.155484   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:53.155490   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:53.155539   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:53.191304   72639 cri.go:89] found id: ""
	I1014 15:03:53.191338   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.191350   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:53.191357   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:53.191438   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:53.224664   72639 cri.go:89] found id: ""
	I1014 15:03:53.224691   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.224702   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:53.224727   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:53.224744   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:53.275751   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:53.275786   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:53.289275   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:53.289303   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:53.369828   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:53.369855   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:53.369871   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:53.457248   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:53.457285   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:56.003384   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:56.017722   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:56.017782   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:56.056644   72639 cri.go:89] found id: ""
	I1014 15:03:56.056675   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.056686   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:56.056694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:56.056757   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:56.094482   72639 cri.go:89] found id: ""
	I1014 15:03:56.094507   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.094517   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:56.094524   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:56.094583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:56.129884   72639 cri.go:89] found id: ""
	I1014 15:03:56.129913   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.129921   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:56.129926   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:56.129974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:56.167171   72639 cri.go:89] found id: ""
	I1014 15:03:56.167198   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.167206   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:56.167211   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:56.167264   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:56.204400   72639 cri.go:89] found id: ""
	I1014 15:03:56.204433   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.204442   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:56.204447   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:56.204494   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:56.240407   72639 cri.go:89] found id: ""
	I1014 15:03:56.240437   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.240448   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:56.240456   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:56.240517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:56.277653   72639 cri.go:89] found id: ""
	I1014 15:03:56.277679   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.277687   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:56.277693   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:56.277738   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:56.313423   72639 cri.go:89] found id: ""
	I1014 15:03:56.313451   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.313459   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:56.313468   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:56.313480   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:56.368094   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:56.368133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:56.382563   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:56.382621   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:56.455106   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:56.455130   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:56.455144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:56.532288   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:56.532329   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:55.636007   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:57.637196   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:56.566763   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.066227   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:58.309184   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:00.309370   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.072469   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:59.089024   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:59.089094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:59.130798   72639 cri.go:89] found id: ""
	I1014 15:03:59.130829   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.130840   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:59.130848   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:59.130908   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:59.167828   72639 cri.go:89] found id: ""
	I1014 15:03:59.167854   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.167864   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:59.167871   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:59.167932   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:59.223482   72639 cri.go:89] found id: ""
	I1014 15:03:59.223509   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.223520   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:59.223528   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:59.223590   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:59.261186   72639 cri.go:89] found id: ""
	I1014 15:03:59.261231   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.261243   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:59.261251   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:59.261314   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:59.296924   72639 cri.go:89] found id: ""
	I1014 15:03:59.296985   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.297000   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:59.297008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:59.297084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:59.333891   72639 cri.go:89] found id: ""
	I1014 15:03:59.333915   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.333923   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:59.333929   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:59.333991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:59.374106   72639 cri.go:89] found id: ""
	I1014 15:03:59.374134   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.374143   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:59.374150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:59.374222   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:59.412256   72639 cri.go:89] found id: ""
	I1014 15:03:59.412283   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.412291   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:59.412298   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:59.412308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:59.492869   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:59.492904   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:59.492923   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:59.576441   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:59.576473   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.618638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:59.618668   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:59.671295   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:59.671331   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.184689   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:02.197763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:02.197833   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:02.231709   72639 cri.go:89] found id: ""
	I1014 15:04:02.231734   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.231746   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:02.231753   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:02.231815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:02.269259   72639 cri.go:89] found id: ""
	I1014 15:04:02.269291   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.269303   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:02.269311   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:02.269390   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:02.305926   72639 cri.go:89] found id: ""
	I1014 15:04:02.305956   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.305967   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:02.305975   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:02.306034   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:02.349516   72639 cri.go:89] found id: ""
	I1014 15:04:02.349544   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.349557   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:02.349563   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:02.349622   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:02.388334   72639 cri.go:89] found id: ""
	I1014 15:04:02.388361   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.388371   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:02.388376   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:02.388428   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:02.422742   72639 cri.go:89] found id: ""
	I1014 15:04:02.422770   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.422781   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:02.422789   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:02.422850   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:02.463686   72639 cri.go:89] found id: ""
	I1014 15:04:02.463710   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.463718   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:02.463724   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:02.463770   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:02.498352   72639 cri.go:89] found id: ""
	I1014 15:04:02.498383   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.498394   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:02.498404   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:02.498418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.512531   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:02.512561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:02.585331   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:02.585359   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:02.585373   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:02.667376   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:02.667414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:02.708101   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:02.708133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:00.136170   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.138198   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:01.566456   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.066934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.309906   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.310009   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.310084   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:05.259839   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:05.273102   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:05.273186   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:05.311745   72639 cri.go:89] found id: ""
	I1014 15:04:05.311768   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.311776   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:05.311787   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:05.311834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:05.349313   72639 cri.go:89] found id: ""
	I1014 15:04:05.349336   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.349344   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:05.349352   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:05.349416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:05.388003   72639 cri.go:89] found id: ""
	I1014 15:04:05.388026   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.388034   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:05.388039   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:05.388098   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:05.426636   72639 cri.go:89] found id: ""
	I1014 15:04:05.426665   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.426676   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:05.426683   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:05.426745   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:05.461945   72639 cri.go:89] found id: ""
	I1014 15:04:05.461974   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.461983   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:05.461989   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:05.462049   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:05.497099   72639 cri.go:89] found id: ""
	I1014 15:04:05.497130   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.497142   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:05.497149   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:05.497216   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:05.531621   72639 cri.go:89] found id: ""
	I1014 15:04:05.531652   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.531664   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:05.531671   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:05.531729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:05.568950   72639 cri.go:89] found id: ""
	I1014 15:04:05.568973   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.568983   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:05.568992   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:05.569012   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.624806   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:05.624846   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:05.651912   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:05.651961   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:05.740342   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:05.740369   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:05.740384   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:05.817901   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:05.817932   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:04.636643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:07.137525   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.566519   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.567458   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.809718   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.809968   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.360267   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:08.373249   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:08.373325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:08.409485   72639 cri.go:89] found id: ""
	I1014 15:04:08.409520   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.409535   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:08.409542   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:08.409604   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:08.444977   72639 cri.go:89] found id: ""
	I1014 15:04:08.445000   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.445008   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:08.445014   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:08.445061   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:08.478080   72639 cri.go:89] found id: ""
	I1014 15:04:08.478108   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.478117   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:08.478123   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:08.478169   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:08.511510   72639 cri.go:89] found id: ""
	I1014 15:04:08.511536   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.511545   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:08.511552   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:08.511603   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:08.546260   72639 cri.go:89] found id: ""
	I1014 15:04:08.546285   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.546292   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:08.546299   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:08.546347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:08.582775   72639 cri.go:89] found id: ""
	I1014 15:04:08.582799   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.582810   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:08.582816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:08.582875   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:08.619208   72639 cri.go:89] found id: ""
	I1014 15:04:08.619231   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.619239   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:08.619244   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:08.619299   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:08.654823   72639 cri.go:89] found id: ""
	I1014 15:04:08.654849   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.654860   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:08.654870   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:08.654885   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:08.704543   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:08.704574   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:08.718111   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:08.718144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:08.792267   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:08.792290   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:08.792309   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:08.870178   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:08.870210   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:11.409975   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:11.432171   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:11.432243   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:11.468997   72639 cri.go:89] found id: ""
	I1014 15:04:11.469021   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.469030   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:11.469035   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:11.469094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:11.504312   72639 cri.go:89] found id: ""
	I1014 15:04:11.504337   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.504346   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:11.504354   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:11.504417   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:11.540628   72639 cri.go:89] found id: ""
	I1014 15:04:11.540654   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.540662   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:11.540667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:11.540729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:11.576466   72639 cri.go:89] found id: ""
	I1014 15:04:11.576491   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.576498   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:11.576506   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:11.576550   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:11.611466   72639 cri.go:89] found id: ""
	I1014 15:04:11.611501   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.611512   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:11.611519   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:11.611578   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:11.650089   72639 cri.go:89] found id: ""
	I1014 15:04:11.650116   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.650126   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:11.650133   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:11.650191   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:11.686538   72639 cri.go:89] found id: ""
	I1014 15:04:11.686563   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.686571   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:11.686577   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:11.686654   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:11.725494   72639 cri.go:89] found id: ""
	I1014 15:04:11.725517   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.725524   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:11.725532   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:11.725545   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:11.779062   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:11.779102   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:11.792726   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:11.792753   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:11.867945   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:11.867972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:11.867986   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:11.952299   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:11.952340   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:09.636140   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:11.636455   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.136183   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.567626   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.065875   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.066484   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.310523   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.811094   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.493922   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:14.506754   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:14.506817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:14.540456   72639 cri.go:89] found id: ""
	I1014 15:04:14.540480   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.540489   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:14.540495   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:14.540545   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:14.574819   72639 cri.go:89] found id: ""
	I1014 15:04:14.574843   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.574853   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:14.574859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:14.574917   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:14.608834   72639 cri.go:89] found id: ""
	I1014 15:04:14.608859   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.608868   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:14.608873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:14.608920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:14.644182   72639 cri.go:89] found id: ""
	I1014 15:04:14.644210   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.644218   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:14.644223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:14.644283   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:14.679113   72639 cri.go:89] found id: ""
	I1014 15:04:14.679145   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.679156   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:14.679164   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:14.679228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:14.716111   72639 cri.go:89] found id: ""
	I1014 15:04:14.716142   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.716154   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:14.716167   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:14.716220   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:14.755884   72639 cri.go:89] found id: ""
	I1014 15:04:14.755907   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.755915   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:14.755920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:14.755968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:14.794167   72639 cri.go:89] found id: ""
	I1014 15:04:14.794195   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.794207   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:14.794217   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:14.794234   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:14.844828   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:14.844864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:14.859424   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:14.859451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:14.936660   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:14.936687   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:14.936703   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:15.017034   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:15.017070   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:17.555604   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:17.570628   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:17.570687   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:17.612919   72639 cri.go:89] found id: ""
	I1014 15:04:17.612943   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.612951   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:17.612956   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:17.613002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:17.651178   72639 cri.go:89] found id: ""
	I1014 15:04:17.651210   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.651220   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:17.651226   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:17.651278   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:17.687923   72639 cri.go:89] found id: ""
	I1014 15:04:17.687955   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.687966   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:17.687973   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:17.688024   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:17.724759   72639 cri.go:89] found id: ""
	I1014 15:04:17.724790   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.724800   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:17.724807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:17.724866   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:17.760189   72639 cri.go:89] found id: ""
	I1014 15:04:17.760212   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.760220   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:17.760226   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:17.760274   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:17.797517   72639 cri.go:89] found id: ""
	I1014 15:04:17.797541   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.797549   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:17.797554   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:17.797601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:17.833238   72639 cri.go:89] found id: ""
	I1014 15:04:17.833261   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.833270   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:17.833275   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:17.833321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:17.868828   72639 cri.go:89] found id: ""
	I1014 15:04:17.868857   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.868865   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:17.868873   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:17.868883   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:17.956972   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:17.957011   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:16.137357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.636865   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:17.067415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:19.566146   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.310380   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:20.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.006354   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:18.006390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:18.056237   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:18.056271   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:18.070763   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:18.070792   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:18.147471   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:20.648238   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:20.661465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:20.661534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:20.695869   72639 cri.go:89] found id: ""
	I1014 15:04:20.695894   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.695902   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:20.695907   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:20.695957   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:20.729271   72639 cri.go:89] found id: ""
	I1014 15:04:20.729295   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.729313   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:20.729319   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:20.729364   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:20.767110   72639 cri.go:89] found id: ""
	I1014 15:04:20.767137   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.767147   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:20.767154   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:20.767209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:20.802752   72639 cri.go:89] found id: ""
	I1014 15:04:20.802781   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.802791   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:20.802798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:20.802846   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:20.841958   72639 cri.go:89] found id: ""
	I1014 15:04:20.841987   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.841998   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:20.842005   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:20.842066   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:20.878869   72639 cri.go:89] found id: ""
	I1014 15:04:20.878896   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.878907   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:20.878914   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:20.878974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:20.913802   72639 cri.go:89] found id: ""
	I1014 15:04:20.913838   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.913852   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:20.913861   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:20.913922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:20.948350   72639 cri.go:89] found id: ""
	I1014 15:04:20.948378   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.948395   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:20.948403   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:20.948416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:21.001065   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:21.001098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:21.014427   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:21.014458   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:21.091386   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:21.091412   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:21.091432   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:21.175255   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:21.175299   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:21.137358   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.636623   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.066415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:24.066476   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.809925   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:25.309528   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.718260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:23.732366   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:23.732445   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:23.767269   72639 cri.go:89] found id: ""
	I1014 15:04:23.767299   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.767311   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:23.767317   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:23.767379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:23.808502   72639 cri.go:89] found id: ""
	I1014 15:04:23.808532   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.808543   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:23.808550   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:23.808606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:23.845632   72639 cri.go:89] found id: ""
	I1014 15:04:23.845664   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.845677   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:23.845685   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:23.845753   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:23.880218   72639 cri.go:89] found id: ""
	I1014 15:04:23.880249   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.880261   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:23.880268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:23.880332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:23.915674   72639 cri.go:89] found id: ""
	I1014 15:04:23.915697   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.915705   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:23.915710   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:23.915767   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:23.950526   72639 cri.go:89] found id: ""
	I1014 15:04:23.950559   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.950570   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:23.950578   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:23.950656   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:23.986130   72639 cri.go:89] found id: ""
	I1014 15:04:23.986167   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.986178   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:23.986186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:23.986246   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:24.027112   72639 cri.go:89] found id: ""
	I1014 15:04:24.027141   72639 logs.go:282] 0 containers: []
	W1014 15:04:24.027154   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:24.027165   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:24.027181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:24.082559   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:24.082610   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:24.096900   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:24.096929   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:24.173293   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:24.173327   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:24.173341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:24.256921   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:24.256962   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:26.802073   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:26.817307   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:26.817366   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:26.855777   72639 cri.go:89] found id: ""
	I1014 15:04:26.855805   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.855817   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:26.855825   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:26.855876   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:26.892260   72639 cri.go:89] found id: ""
	I1014 15:04:26.892288   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.892300   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:26.892308   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:26.892369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:26.931066   72639 cri.go:89] found id: ""
	I1014 15:04:26.931103   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.931114   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:26.931122   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:26.931174   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:26.966890   72639 cri.go:89] found id: ""
	I1014 15:04:26.966923   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.966933   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:26.966941   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:26.967002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:27.001338   72639 cri.go:89] found id: ""
	I1014 15:04:27.001368   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.001379   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:27.001386   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:27.001454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:27.041798   72639 cri.go:89] found id: ""
	I1014 15:04:27.041830   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.041839   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:27.041844   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:27.041905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:27.080248   72639 cri.go:89] found id: ""
	I1014 15:04:27.080279   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.080288   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:27.080293   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:27.080341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:27.116207   72639 cri.go:89] found id: ""
	I1014 15:04:27.116234   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.116242   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:27.116250   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:27.116264   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:27.191149   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:27.191174   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:27.191203   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:27.275771   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:27.275808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:27.323223   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:27.323254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:27.375409   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:27.375455   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:26.137156   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.637895   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:26.066790   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.565208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:27.810315   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.309211   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:29.890408   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:29.904797   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:29.904853   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:29.938655   72639 cri.go:89] found id: ""
	I1014 15:04:29.938685   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.938698   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:29.938705   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:29.938765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:29.976477   72639 cri.go:89] found id: ""
	I1014 15:04:29.976508   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.976519   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:29.976526   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:29.976583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:30.014813   72639 cri.go:89] found id: ""
	I1014 15:04:30.014842   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.014853   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:30.014860   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:30.014926   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:30.050804   72639 cri.go:89] found id: ""
	I1014 15:04:30.050833   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.050844   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:30.050854   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:30.050918   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:30.087921   72639 cri.go:89] found id: ""
	I1014 15:04:30.087946   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.087954   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:30.087959   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:30.088016   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:30.125411   72639 cri.go:89] found id: ""
	I1014 15:04:30.125446   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.125458   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:30.125465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:30.125519   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:30.162067   72639 cri.go:89] found id: ""
	I1014 15:04:30.162099   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.162110   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:30.162118   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:30.162181   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:30.200376   72639 cri.go:89] found id: ""
	I1014 15:04:30.200406   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.200418   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:30.200435   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:30.200451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:30.279965   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:30.279992   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:30.280007   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:30.364866   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:30.364900   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:30.408808   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:30.408842   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:30.464473   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:30.464507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:32.980254   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:32.994254   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:32.994320   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:31.136531   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.137201   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.566228   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.567393   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.065955   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.810349   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.308794   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.035996   72639 cri.go:89] found id: ""
	I1014 15:04:33.036025   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.036036   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:33.036043   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:33.036103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:33.077494   72639 cri.go:89] found id: ""
	I1014 15:04:33.077522   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.077531   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:33.077538   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:33.077585   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:33.112666   72639 cri.go:89] found id: ""
	I1014 15:04:33.112695   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.112705   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:33.112711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:33.112772   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:33.150229   72639 cri.go:89] found id: ""
	I1014 15:04:33.150266   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.150276   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:33.150282   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:33.150336   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:33.186960   72639 cri.go:89] found id: ""
	I1014 15:04:33.186989   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.187001   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:33.187008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:33.187062   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:33.223596   72639 cri.go:89] found id: ""
	I1014 15:04:33.223631   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.223641   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:33.223647   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:33.223711   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:33.260137   72639 cri.go:89] found id: ""
	I1014 15:04:33.260162   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.260170   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:33.260175   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:33.260228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:33.298072   72639 cri.go:89] found id: ""
	I1014 15:04:33.298095   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.298103   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:33.298110   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:33.298121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:33.379587   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:33.379623   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:33.423427   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:33.423456   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:33.474644   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:33.474683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:33.488324   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:33.488354   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:33.556257   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.056955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:36.072461   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:36.072536   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:36.109467   72639 cri.go:89] found id: ""
	I1014 15:04:36.109493   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.109502   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:36.109509   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:36.109561   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:36.147985   72639 cri.go:89] found id: ""
	I1014 15:04:36.148012   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.148020   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:36.148025   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:36.148071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:36.183885   72639 cri.go:89] found id: ""
	I1014 15:04:36.183906   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.183914   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:36.183919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:36.183968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:36.220994   72639 cri.go:89] found id: ""
	I1014 15:04:36.221025   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.221036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:36.221044   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:36.221108   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:36.256586   72639 cri.go:89] found id: ""
	I1014 15:04:36.256612   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.256621   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:36.256627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:36.256683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:36.293229   72639 cri.go:89] found id: ""
	I1014 15:04:36.293256   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.293265   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:36.293272   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:36.293339   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:36.329254   72639 cri.go:89] found id: ""
	I1014 15:04:36.329279   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.329290   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:36.329297   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:36.329357   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:36.366495   72639 cri.go:89] found id: ""
	I1014 15:04:36.366526   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.366538   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:36.366548   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:36.366561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:36.420985   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:36.421018   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:36.435532   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:36.435565   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:36.510459   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.510484   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:36.510499   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:36.593057   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:36.593094   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:35.637182   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.637348   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.066334   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.566950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.309629   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.809500   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.138570   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:39.152280   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:39.152342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:39.186647   72639 cri.go:89] found id: ""
	I1014 15:04:39.186676   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.186687   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:39.186694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:39.186754   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:39.223560   72639 cri.go:89] found id: ""
	I1014 15:04:39.223586   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.223594   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:39.223599   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:39.223644   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:39.257835   72639 cri.go:89] found id: ""
	I1014 15:04:39.257867   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.257879   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:39.257886   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:39.257947   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:39.294656   72639 cri.go:89] found id: ""
	I1014 15:04:39.294684   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.294692   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:39.294699   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:39.294750   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:39.333474   72639 cri.go:89] found id: ""
	I1014 15:04:39.333503   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.333513   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:39.333520   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:39.333586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:39.374385   72639 cri.go:89] found id: ""
	I1014 15:04:39.374414   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.374424   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:39.374435   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:39.374483   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:39.412856   72639 cri.go:89] found id: ""
	I1014 15:04:39.412888   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.412899   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:39.412906   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:39.412966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:39.463087   72639 cri.go:89] found id: ""
	I1014 15:04:39.463115   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.463127   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:39.463138   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:39.463154   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:39.514309   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:39.514342   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:39.528947   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:39.528972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:39.603984   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:39.604004   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:39.604016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.685053   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:39.685093   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.234178   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:42.247421   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:42.247497   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:42.288496   72639 cri.go:89] found id: ""
	I1014 15:04:42.288521   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.288529   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:42.288535   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:42.288588   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:42.324346   72639 cri.go:89] found id: ""
	I1014 15:04:42.324382   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.324394   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:42.324401   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:42.324469   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:42.362879   72639 cri.go:89] found id: ""
	I1014 15:04:42.362910   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.362922   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:42.362928   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:42.362991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:42.399347   72639 cri.go:89] found id: ""
	I1014 15:04:42.399375   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.399383   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:42.399389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:42.399473   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:42.434942   72639 cri.go:89] found id: ""
	I1014 15:04:42.434971   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.434990   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:42.434999   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:42.435063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:42.470886   72639 cri.go:89] found id: ""
	I1014 15:04:42.470916   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.470928   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:42.470934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:42.470994   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:42.510713   72639 cri.go:89] found id: ""
	I1014 15:04:42.510742   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.510752   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:42.510758   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:42.510820   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:42.544506   72639 cri.go:89] found id: ""
	I1014 15:04:42.544538   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.544547   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:42.544559   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:42.544570   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.588658   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:42.588694   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:42.642165   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:42.642198   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:42.658073   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:42.658110   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:42.730486   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:42.730510   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:42.730524   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.637476   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.637715   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.137654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:42.065534   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.066309   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.809932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.309377   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.309699   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:45.307806   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:45.321664   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:45.321733   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:45.359670   72639 cri.go:89] found id: ""
	I1014 15:04:45.359697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.359708   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:45.359715   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:45.359781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:45.398673   72639 cri.go:89] found id: ""
	I1014 15:04:45.398703   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.398715   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:45.398722   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:45.398784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:45.441656   72639 cri.go:89] found id: ""
	I1014 15:04:45.441685   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.441697   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:45.441705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:45.441768   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:45.476159   72639 cri.go:89] found id: ""
	I1014 15:04:45.476188   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.476195   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:45.476201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:45.476263   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:45.513776   72639 cri.go:89] found id: ""
	I1014 15:04:45.513807   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.513819   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:45.513828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:45.513894   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:45.550336   72639 cri.go:89] found id: ""
	I1014 15:04:45.550371   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.550382   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:45.550388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:45.550450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:45.586668   72639 cri.go:89] found id: ""
	I1014 15:04:45.586697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.586705   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:45.586711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:45.586760   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:45.622530   72639 cri.go:89] found id: ""
	I1014 15:04:45.622559   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.622568   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:45.622576   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:45.622589   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:45.674471   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:45.674504   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:45.690430   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:45.690463   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:45.772133   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:45.772165   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:45.772181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.859835   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:45.859880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:46.636239   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.637696   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.565440   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.569076   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.309788   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.310209   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.434011   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:48.448747   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:48.448826   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:48.493642   72639 cri.go:89] found id: ""
	I1014 15:04:48.493668   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.493680   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:48.493687   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:48.493747   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:48.530298   72639 cri.go:89] found id: ""
	I1014 15:04:48.530327   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.530336   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:48.530344   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:48.530403   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:48.566215   72639 cri.go:89] found id: ""
	I1014 15:04:48.566242   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.566252   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:48.566261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:48.566325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:48.604528   72639 cri.go:89] found id: ""
	I1014 15:04:48.604553   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.604561   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:48.604566   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:48.604616   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:48.646152   72639 cri.go:89] found id: ""
	I1014 15:04:48.646180   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.646191   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:48.646198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:48.646257   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:48.682670   72639 cri.go:89] found id: ""
	I1014 15:04:48.682696   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.682704   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:48.682711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:48.682762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:48.722292   72639 cri.go:89] found id: ""
	I1014 15:04:48.722318   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.722326   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:48.722335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:48.722400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:48.762474   72639 cri.go:89] found id: ""
	I1014 15:04:48.762506   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.762518   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:48.762528   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:48.762553   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:48.776628   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:48.776652   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:48.849904   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:48.849928   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:48.849941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:48.927033   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:48.927068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.970775   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:48.970807   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:51.521113   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:51.535318   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:51.535389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:51.582631   72639 cri.go:89] found id: ""
	I1014 15:04:51.582658   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.582666   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:51.582671   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:51.582721   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:51.655323   72639 cri.go:89] found id: ""
	I1014 15:04:51.655362   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.655371   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:51.655376   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:51.655433   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:51.722837   72639 cri.go:89] found id: ""
	I1014 15:04:51.722863   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.722875   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:51.722882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:51.722939   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:51.759917   72639 cri.go:89] found id: ""
	I1014 15:04:51.759946   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.759957   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:51.759963   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:51.760023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:51.798656   72639 cri.go:89] found id: ""
	I1014 15:04:51.798689   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.798702   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:51.798711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:51.798777   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:51.839285   72639 cri.go:89] found id: ""
	I1014 15:04:51.839312   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.839324   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:51.839334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:51.839391   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:51.876997   72639 cri.go:89] found id: ""
	I1014 15:04:51.877028   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.877038   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:51.877045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:51.877091   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:51.913991   72639 cri.go:89] found id: ""
	I1014 15:04:51.914020   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.914028   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:51.914036   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:51.914046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:51.993392   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:51.993427   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:52.039722   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:52.039756   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:52.090901   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:52.090937   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:52.105014   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:52.105052   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:52.175505   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:51.137343   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.636660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.575054   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.067208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:52.809933   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.810498   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.676549   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:54.690113   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:54.690204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:54.726478   72639 cri.go:89] found id: ""
	I1014 15:04:54.726511   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.726523   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:54.726538   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:54.726611   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:54.764990   72639 cri.go:89] found id: ""
	I1014 15:04:54.765017   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.765025   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:54.765031   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:54.765095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:54.804779   72639 cri.go:89] found id: ""
	I1014 15:04:54.804808   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.804819   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:54.804828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:54.804886   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:54.848657   72639 cri.go:89] found id: ""
	I1014 15:04:54.848682   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.848698   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:54.848705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:54.848765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:54.886806   72639 cri.go:89] found id: ""
	I1014 15:04:54.886834   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.886845   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:54.886853   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:54.886912   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:54.923297   72639 cri.go:89] found id: ""
	I1014 15:04:54.923323   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.923330   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:54.923335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:54.923380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:54.966297   72639 cri.go:89] found id: ""
	I1014 15:04:54.966321   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.966329   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:54.966334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:54.966382   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:55.012047   72639 cri.go:89] found id: ""
	I1014 15:04:55.012071   72639 logs.go:282] 0 containers: []
	W1014 15:04:55.012079   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:55.012087   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:55.012097   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:55.066031   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:55.066063   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:55.080954   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:55.080981   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:55.159644   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:55.159670   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:55.159683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:55.243303   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:55.243341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:57.784555   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:57.799051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:57.799132   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:57.841084   72639 cri.go:89] found id: ""
	I1014 15:04:57.841108   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.841115   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:57.841121   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:57.841167   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:57.881510   72639 cri.go:89] found id: ""
	I1014 15:04:57.881542   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.881555   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:57.881562   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:57.881624   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:57.916893   72639 cri.go:89] found id: ""
	I1014 15:04:57.916923   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.916934   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:57.916940   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:57.916988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:57.956991   72639 cri.go:89] found id: ""
	I1014 15:04:57.957023   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.957036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:57.957046   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:57.957118   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:57.993765   72639 cri.go:89] found id: ""
	I1014 15:04:57.993792   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.993803   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:57.993809   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:57.993869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:56.136994   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.137736   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:55.566021   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.567950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:00.068276   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.310643   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:59.808898   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.032044   72639 cri.go:89] found id: ""
	I1014 15:04:58.032085   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.032098   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:58.032107   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:58.032173   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:58.069733   72639 cri.go:89] found id: ""
	I1014 15:04:58.069754   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.069762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:58.069767   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:58.069813   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:58.105851   72639 cri.go:89] found id: ""
	I1014 15:04:58.105880   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.105891   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:58.105901   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:58.105914   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:58.159922   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:58.159956   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:58.173779   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:58.173802   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:58.253551   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:58.253576   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:58.253591   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:58.342607   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:58.342647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:00.884705   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:00.900147   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:00.900215   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:00.940372   72639 cri.go:89] found id: ""
	I1014 15:05:00.940402   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.940413   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:00.940420   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:00.940489   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:00.981400   72639 cri.go:89] found id: ""
	I1014 15:05:00.981431   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.981441   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:00.981447   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:00.981517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:01.021981   72639 cri.go:89] found id: ""
	I1014 15:05:01.022002   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.022011   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:01.022016   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:01.022067   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:01.056976   72639 cri.go:89] found id: ""
	I1014 15:05:01.057005   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.057013   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:01.057020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:01.057063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:01.092702   72639 cri.go:89] found id: ""
	I1014 15:05:01.092732   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.092739   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:01.092745   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:01.092803   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:01.128861   72639 cri.go:89] found id: ""
	I1014 15:05:01.128892   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.128902   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:01.128908   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:01.128958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:01.162672   72639 cri.go:89] found id: ""
	I1014 15:05:01.162702   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.162712   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:01.162719   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:01.162791   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:01.202724   72639 cri.go:89] found id: ""
	I1014 15:05:01.202751   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.202761   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:01.202770   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:01.202785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:01.280702   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:01.280723   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:01.280735   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:01.362909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:01.362943   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:01.406737   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:01.406766   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:01.460090   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:01.460125   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:00.636730   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.136587   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:02.568415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:05.066568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:01.809661   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:04.309079   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:06.309544   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.975661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:03.989811   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:03.989874   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:04.028396   72639 cri.go:89] found id: ""
	I1014 15:05:04.028426   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.028438   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:04.028445   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:04.028499   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:04.065871   72639 cri.go:89] found id: ""
	I1014 15:05:04.065901   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.065912   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:04.065919   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:04.065980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:04.103155   72639 cri.go:89] found id: ""
	I1014 15:05:04.103184   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.103192   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:04.103198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:04.103248   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:04.139503   72639 cri.go:89] found id: ""
	I1014 15:05:04.139531   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.139539   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:04.139545   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:04.139601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:04.171638   72639 cri.go:89] found id: ""
	I1014 15:05:04.171663   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.171671   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:04.171676   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:04.171734   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:04.213720   72639 cri.go:89] found id: ""
	I1014 15:05:04.213751   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.213760   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:04.213766   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:04.213815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:04.248088   72639 cri.go:89] found id: ""
	I1014 15:05:04.248109   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.248117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:04.248121   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:04.248183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:04.286454   72639 cri.go:89] found id: ""
	I1014 15:05:04.286479   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.286487   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:04.286495   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:04.286506   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:04.339564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:04.339599   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:04.353034   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:04.353061   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:04.432764   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:04.432786   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:04.432797   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:04.514561   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:04.514613   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.057507   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:07.072798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:07.072873   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:07.113672   72639 cri.go:89] found id: ""
	I1014 15:05:07.113694   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.113701   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:07.113706   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:07.113761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:07.149321   72639 cri.go:89] found id: ""
	I1014 15:05:07.149348   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.149357   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:07.149362   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:07.149416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:07.185717   72639 cri.go:89] found id: ""
	I1014 15:05:07.185748   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.185760   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:07.185768   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:07.185822   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:07.225747   72639 cri.go:89] found id: ""
	I1014 15:05:07.225772   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.225783   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:07.225791   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:07.225843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:07.265834   72639 cri.go:89] found id: ""
	I1014 15:05:07.265864   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.265875   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:07.265882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:07.265944   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:07.300595   72639 cri.go:89] found id: ""
	I1014 15:05:07.300622   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.300631   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:07.300637   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:07.300686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:07.343249   72639 cri.go:89] found id: ""
	I1014 15:05:07.343280   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.343291   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:07.343298   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:07.343365   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:07.379525   72639 cri.go:89] found id: ""
	I1014 15:05:07.379549   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.379557   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:07.379564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:07.379576   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:07.393622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:07.393653   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:07.473973   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:07.473998   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:07.474013   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:07.556937   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:07.556971   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.602224   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:07.602249   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:05.137157   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.137297   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.137708   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.066795   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.566723   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:08.809562   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.309821   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:10.156920   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:10.170971   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:10.171037   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:10.206568   72639 cri.go:89] found id: ""
	I1014 15:05:10.206610   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.206623   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:10.206630   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:10.206689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:10.249075   72639 cri.go:89] found id: ""
	I1014 15:05:10.249101   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.249110   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:10.249121   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:10.249175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:10.285620   72639 cri.go:89] found id: ""
	I1014 15:05:10.285649   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.285660   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:10.285667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:10.285730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:10.322291   72639 cri.go:89] found id: ""
	I1014 15:05:10.322314   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.322322   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:10.322327   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:10.322379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:10.356691   72639 cri.go:89] found id: ""
	I1014 15:05:10.356720   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.356730   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:10.356738   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:10.356802   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:10.401192   72639 cri.go:89] found id: ""
	I1014 15:05:10.401223   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.401234   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:10.401242   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:10.401303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:10.438198   72639 cri.go:89] found id: ""
	I1014 15:05:10.438225   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.438236   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:10.438243   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:10.438380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:10.474142   72639 cri.go:89] found id: ""
	I1014 15:05:10.474166   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.474174   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:10.474181   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:10.474193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:10.546549   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:10.546569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:10.546582   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:10.624235   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:10.624268   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:10.664896   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:10.664926   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.719425   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:10.719464   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:11.637824   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.139552   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.566806   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.066803   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.809728   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.310153   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.234162   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:13.247614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:13.247689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:13.285040   72639 cri.go:89] found id: ""
	I1014 15:05:13.285068   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.285078   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:13.285086   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:13.285154   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:13.334084   72639 cri.go:89] found id: ""
	I1014 15:05:13.334125   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.334133   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:13.334139   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:13.334204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:13.369164   72639 cri.go:89] found id: ""
	I1014 15:05:13.369199   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.369211   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:13.369223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:13.369285   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:13.405202   72639 cri.go:89] found id: ""
	I1014 15:05:13.405232   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.405244   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:13.405252   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:13.405304   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:13.443271   72639 cri.go:89] found id: ""
	I1014 15:05:13.443302   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.443311   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:13.443317   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:13.443369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:13.483541   72639 cri.go:89] found id: ""
	I1014 15:05:13.483570   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.483580   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:13.483588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:13.483650   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:13.518580   72639 cri.go:89] found id: ""
	I1014 15:05:13.518622   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.518633   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:13.518641   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:13.518701   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:13.553638   72639 cri.go:89] found id: ""
	I1014 15:05:13.553668   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.553678   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:13.553688   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:13.553702   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:13.605379   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:13.605413   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.620525   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:13.620556   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:13.699628   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:13.699658   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:13.699672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:13.778006   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:13.778046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.316703   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:16.331511   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:16.331577   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:16.367045   72639 cri.go:89] found id: ""
	I1014 15:05:16.367075   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.367083   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:16.367089   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:16.367144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:16.403240   72639 cri.go:89] found id: ""
	I1014 15:05:16.403264   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.403274   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:16.403285   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:16.403344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:16.438570   72639 cri.go:89] found id: ""
	I1014 15:05:16.438612   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.438625   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:16.438632   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:16.438694   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:16.477153   72639 cri.go:89] found id: ""
	I1014 15:05:16.477174   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.477182   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:16.477187   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:16.477232   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:16.516308   72639 cri.go:89] found id: ""
	I1014 15:05:16.516336   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.516348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:16.516355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:16.516421   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:16.551337   72639 cri.go:89] found id: ""
	I1014 15:05:16.551365   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.551375   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:16.551383   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:16.551450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:16.587073   72639 cri.go:89] found id: ""
	I1014 15:05:16.587105   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.587117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:16.587125   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:16.587183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:16.623940   72639 cri.go:89] found id: ""
	I1014 15:05:16.623962   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.623970   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:16.623978   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:16.623989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.671593   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:16.671618   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:16.723057   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:16.723092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:16.737623   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:16.737656   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:16.809539   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:16.809569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:16.809592   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:16.636818   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.637340   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.566523   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.065985   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.809554   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.390406   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:19.404850   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:19.404928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:19.446931   72639 cri.go:89] found id: ""
	I1014 15:05:19.446962   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.446973   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:19.446980   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:19.447043   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:19.488112   72639 cri.go:89] found id: ""
	I1014 15:05:19.488136   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.488144   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:19.488150   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:19.488208   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:19.523333   72639 cri.go:89] found id: ""
	I1014 15:05:19.523365   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.523382   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:19.523389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:19.523447   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:19.557887   72639 cri.go:89] found id: ""
	I1014 15:05:19.557910   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.557918   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:19.557927   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:19.557972   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:19.593792   72639 cri.go:89] found id: ""
	I1014 15:05:19.593815   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.593822   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:19.593873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:19.593922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:19.628291   72639 cri.go:89] found id: ""
	I1014 15:05:19.628324   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.628335   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:19.628343   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:19.628405   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:19.664088   72639 cri.go:89] found id: ""
	I1014 15:05:19.664118   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.664130   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:19.664138   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:19.664211   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:19.700825   72639 cri.go:89] found id: ""
	I1014 15:05:19.700853   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.700863   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:19.700873   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:19.700886   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:19.741631   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:19.741666   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:19.792667   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:19.792706   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:19.806928   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:19.806965   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:19.880030   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:19.880059   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:19.880073   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.465251   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:22.479031   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:22.479096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:22.519123   72639 cri.go:89] found id: ""
	I1014 15:05:22.519147   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.519158   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:22.519171   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:22.519235   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:22.552250   72639 cri.go:89] found id: ""
	I1014 15:05:22.552277   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.552287   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:22.552294   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:22.552354   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:22.594213   72639 cri.go:89] found id: ""
	I1014 15:05:22.594243   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.594253   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:22.594261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:22.594310   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:22.630081   72639 cri.go:89] found id: ""
	I1014 15:05:22.630110   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.630121   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:22.630129   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:22.630195   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:22.665454   72639 cri.go:89] found id: ""
	I1014 15:05:22.665485   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.665497   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:22.665505   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:22.665568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:22.710697   72639 cri.go:89] found id: ""
	I1014 15:05:22.710725   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.710734   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:22.710742   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:22.710798   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:22.748486   72639 cri.go:89] found id: ""
	I1014 15:05:22.748516   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.748527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:22.748534   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:22.748594   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:22.784646   72639 cri.go:89] found id: ""
	I1014 15:05:22.784674   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.784684   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:22.784695   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:22.784709   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:22.797853   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:22.797880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:22.875382   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:22.875406   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:22.875422   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.957055   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:22.957089   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:20.638448   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.137051   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.066950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.566775   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.309958   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:25.810168   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.008642   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:23.008672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.561277   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:25.575543   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:25.575606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:25.614260   72639 cri.go:89] found id: ""
	I1014 15:05:25.614283   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.614291   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:25.614296   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:25.614353   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:25.654267   72639 cri.go:89] found id: ""
	I1014 15:05:25.654295   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.654307   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:25.654314   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:25.654385   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:25.707597   72639 cri.go:89] found id: ""
	I1014 15:05:25.707626   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.707637   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:25.707644   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:25.707707   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:25.747477   72639 cri.go:89] found id: ""
	I1014 15:05:25.747500   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.747508   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:25.747513   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:25.747571   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:25.785245   72639 cri.go:89] found id: ""
	I1014 15:05:25.785270   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.785279   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:25.785288   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:25.785342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:25.820619   72639 cri.go:89] found id: ""
	I1014 15:05:25.820643   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.820651   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:25.820665   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:25.820722   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:25.861644   72639 cri.go:89] found id: ""
	I1014 15:05:25.861665   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.861673   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:25.861678   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:25.861724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:25.901009   72639 cri.go:89] found id: ""
	I1014 15:05:25.901032   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.901046   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:25.901056   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:25.901068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:25.942918   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:25.942941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.993931   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:25.993964   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:26.008252   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:26.008280   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:26.087316   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:26.087336   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:26.087347   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:25.636727   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:27.637053   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:26.066529   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.567224   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.308855   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:30.811310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.667377   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:28.682586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:28.682682   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:28.729576   72639 cri.go:89] found id: ""
	I1014 15:05:28.729600   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.729608   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:28.729614   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:28.729673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:28.766637   72639 cri.go:89] found id: ""
	I1014 15:05:28.766669   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.766682   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:28.766690   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:28.766762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:28.802280   72639 cri.go:89] found id: ""
	I1014 15:05:28.802308   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.802317   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:28.802322   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:28.802395   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:28.840788   72639 cri.go:89] found id: ""
	I1014 15:05:28.840822   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.840833   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:28.840841   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:28.840898   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:28.878403   72639 cri.go:89] found id: ""
	I1014 15:05:28.878437   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.878447   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:28.878453   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:28.878505   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:28.919054   72639 cri.go:89] found id: ""
	I1014 15:05:28.919082   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.919090   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:28.919096   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:28.919146   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:28.955097   72639 cri.go:89] found id: ""
	I1014 15:05:28.955124   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.955134   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:28.955142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:28.955214   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:28.995681   72639 cri.go:89] found id: ""
	I1014 15:05:28.995711   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.995722   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:28.995731   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:28.995746   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:29.073041   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:29.073066   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:29.073083   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:29.152803   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:29.152838   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:29.192205   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:29.192239   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:29.248128   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:29.248166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:31.762647   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:31.776372   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:31.776454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:31.812234   72639 cri.go:89] found id: ""
	I1014 15:05:31.812259   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.812268   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:31.812275   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:31.812347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:31.850248   72639 cri.go:89] found id: ""
	I1014 15:05:31.850277   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.850294   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:31.850301   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:31.850363   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:31.887768   72639 cri.go:89] found id: ""
	I1014 15:05:31.887796   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.887808   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:31.887816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:31.887870   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:31.923434   72639 cri.go:89] found id: ""
	I1014 15:05:31.923464   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.923476   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:31.923483   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:31.923547   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:31.961027   72639 cri.go:89] found id: ""
	I1014 15:05:31.961055   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.961066   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:31.961073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:31.961135   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:31.996222   72639 cri.go:89] found id: ""
	I1014 15:05:31.996250   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.996260   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:31.996267   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:31.996329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:32.034396   72639 cri.go:89] found id: ""
	I1014 15:05:32.034441   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.034452   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:32.034460   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:32.034528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:32.080105   72639 cri.go:89] found id: ""
	I1014 15:05:32.080142   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.080153   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:32.080164   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:32.080178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:32.161120   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:32.161151   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:32.213511   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:32.213546   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:32.271250   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:32.271287   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:32.285452   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:32.285483   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:32.366108   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:30.136896   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:32.138906   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:31.066229   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.066370   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.067821   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.309846   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.310713   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:34.867317   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:34.882058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:34.882125   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.926220   72639 cri.go:89] found id: ""
	I1014 15:05:34.926251   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.926261   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:34.926268   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:34.926341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:34.965657   72639 cri.go:89] found id: ""
	I1014 15:05:34.965691   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.965702   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:34.965709   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:34.965775   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:35.002422   72639 cri.go:89] found id: ""
	I1014 15:05:35.002446   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.002454   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:35.002459   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:35.002523   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:35.040029   72639 cri.go:89] found id: ""
	I1014 15:05:35.040057   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.040067   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:35.040073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:35.040137   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:35.077041   72639 cri.go:89] found id: ""
	I1014 15:05:35.077067   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.077075   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:35.077080   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:35.077129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:35.113723   72639 cri.go:89] found id: ""
	I1014 15:05:35.113754   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.113763   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:35.113770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:35.113854   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:35.152003   72639 cri.go:89] found id: ""
	I1014 15:05:35.152025   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.152033   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:35.152038   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:35.152084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:35.186707   72639 cri.go:89] found id: ""
	I1014 15:05:35.186735   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.186746   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:35.186756   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:35.186769   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:35.267899   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:35.267941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:35.310382   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:35.310414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:35.364811   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:35.364852   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:35.378359   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:35.378386   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:35.453522   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:37.953807   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:37.967515   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:37.967579   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.637257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.137643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.566344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:39.566704   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.810414   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:40.308798   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:38.007923   72639 cri.go:89] found id: ""
	I1014 15:05:38.007955   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.007964   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:38.007969   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:38.008023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:38.047451   72639 cri.go:89] found id: ""
	I1014 15:05:38.047476   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.047484   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:38.047490   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:38.047542   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:38.087141   72639 cri.go:89] found id: ""
	I1014 15:05:38.087165   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.087174   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:38.087186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:38.087234   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:38.126556   72639 cri.go:89] found id: ""
	I1014 15:05:38.126583   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.126604   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:38.126612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:38.126670   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:38.165318   72639 cri.go:89] found id: ""
	I1014 15:05:38.165341   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.165350   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:38.165356   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:38.165400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:38.199498   72639 cri.go:89] found id: ""
	I1014 15:05:38.199533   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.199544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:38.199553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:38.199618   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:38.235030   72639 cri.go:89] found id: ""
	I1014 15:05:38.235058   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.235067   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:38.235072   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:38.235129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:38.268900   72639 cri.go:89] found id: ""
	I1014 15:05:38.268926   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.268935   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:38.268943   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:38.268957   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:38.282503   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:38.282532   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:38.357943   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:38.357972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:38.357987   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:38.448417   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:38.448453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:38.490023   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:38.490049   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.045691   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:41.061188   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:41.061251   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:41.102885   72639 cri.go:89] found id: ""
	I1014 15:05:41.102909   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.102917   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:41.102923   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:41.102971   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:41.139402   72639 cri.go:89] found id: ""
	I1014 15:05:41.139427   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.139437   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:41.139444   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:41.139501   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:41.179881   72639 cri.go:89] found id: ""
	I1014 15:05:41.179926   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.179939   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:41.179946   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:41.180008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:41.215861   72639 cri.go:89] found id: ""
	I1014 15:05:41.215897   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.215910   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:41.215919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:41.215987   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:41.251314   72639 cri.go:89] found id: ""
	I1014 15:05:41.251341   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.251351   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:41.251355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:41.251404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:41.285986   72639 cri.go:89] found id: ""
	I1014 15:05:41.286010   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.286017   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:41.286025   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:41.286071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:41.323730   72639 cri.go:89] found id: ""
	I1014 15:05:41.323756   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.323764   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:41.323769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:41.323816   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:41.360787   72639 cri.go:89] found id: ""
	I1014 15:05:41.360817   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.360825   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:41.360834   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:41.360847   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:41.403137   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:41.403172   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.459217   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:41.459253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:41.473529   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:41.473558   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:41.547384   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:41.547405   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:41.547416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:39.637477   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.137176   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:41.569245   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.066760   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.309212   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.310281   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.129494   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:44.144061   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:44.144129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:44.185872   72639 cri.go:89] found id: ""
	I1014 15:05:44.185896   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.185904   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:44.185909   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:44.185955   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:44.222618   72639 cri.go:89] found id: ""
	I1014 15:05:44.222648   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.222658   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:44.222663   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:44.222723   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:44.260730   72639 cri.go:89] found id: ""
	I1014 15:05:44.260761   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.260773   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:44.260780   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:44.260872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:44.303033   72639 cri.go:89] found id: ""
	I1014 15:05:44.303124   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.303141   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:44.303150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:44.303223   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:44.344573   72639 cri.go:89] found id: ""
	I1014 15:05:44.344600   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.344609   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:44.344614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:44.344660   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:44.386091   72639 cri.go:89] found id: ""
	I1014 15:05:44.386122   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.386131   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:44.386137   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:44.386199   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:44.424609   72639 cri.go:89] found id: ""
	I1014 15:05:44.424634   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.424644   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:44.424656   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:44.424724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:44.463997   72639 cri.go:89] found id: ""
	I1014 15:05:44.464023   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.464033   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:44.464043   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:44.464057   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:44.516883   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:44.516921   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:44.530785   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:44.530820   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:44.605202   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:44.605229   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:44.605245   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.685277   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:44.685312   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:47.227851   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:47.242737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:47.242817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:47.279395   72639 cri.go:89] found id: ""
	I1014 15:05:47.279421   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.279428   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:47.279434   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:47.279495   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:47.315002   72639 cri.go:89] found id: ""
	I1014 15:05:47.315032   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.315043   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:47.315050   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:47.315120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:47.354133   72639 cri.go:89] found id: ""
	I1014 15:05:47.354162   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.354173   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:47.354180   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:47.354245   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:47.389394   72639 cri.go:89] found id: ""
	I1014 15:05:47.389419   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.389427   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:47.389439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:47.389498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:47.426564   72639 cri.go:89] found id: ""
	I1014 15:05:47.426592   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.426619   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:47.426627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:47.426676   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:47.466953   72639 cri.go:89] found id: ""
	I1014 15:05:47.466980   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.466989   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:47.466996   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:47.467065   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:47.508563   72639 cri.go:89] found id: ""
	I1014 15:05:47.508595   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.508605   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:47.508613   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:47.508665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:47.548974   72639 cri.go:89] found id: ""
	I1014 15:05:47.549002   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.549012   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:47.549022   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:47.549036   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:47.604768   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:47.604799   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:47.619681   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:47.619717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:47.692479   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:47.692506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:47.692522   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:47.773711   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:47.773751   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:44.637916   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:47.137070   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.566472   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.566743   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.809406   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.811359   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:51.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.314509   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:50.330883   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:50.330958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:50.375090   72639 cri.go:89] found id: ""
	I1014 15:05:50.375121   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.375133   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:50.375140   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:50.375201   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:50.415000   72639 cri.go:89] found id: ""
	I1014 15:05:50.415031   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.415041   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:50.415048   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:50.415099   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:50.453937   72639 cri.go:89] found id: ""
	I1014 15:05:50.453967   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.453976   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:50.453983   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:50.454047   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:50.498752   72639 cri.go:89] found id: ""
	I1014 15:05:50.498778   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.498785   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:50.498790   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:50.498858   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:50.537819   72639 cri.go:89] found id: ""
	I1014 15:05:50.537855   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.537864   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:50.537871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:50.537920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:50.577141   72639 cri.go:89] found id: ""
	I1014 15:05:50.577168   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.577179   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:50.577186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:50.577250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:50.612462   72639 cri.go:89] found id: ""
	I1014 15:05:50.612504   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.612527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:50.612535   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:50.612597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:50.648816   72639 cri.go:89] found id: ""
	I1014 15:05:50.648845   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.648855   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:50.648866   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:50.648879   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:50.662546   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:50.662578   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:50.733128   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:50.733152   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:50.733166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:50.810884   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:50.810913   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.855878   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:50.855905   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:49.637103   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:52.137615   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.567300   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.066883   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.810090   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.312861   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.413608   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:53.428380   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:53.428453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:53.463440   72639 cri.go:89] found id: ""
	I1014 15:05:53.463464   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.463473   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:53.463479   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:53.463534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:53.499024   72639 cri.go:89] found id: ""
	I1014 15:05:53.499050   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.499058   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:53.499064   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:53.499121   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:53.534396   72639 cri.go:89] found id: ""
	I1014 15:05:53.534425   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.534435   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:53.534442   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:53.534504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:53.571396   72639 cri.go:89] found id: ""
	I1014 15:05:53.571422   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.571432   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:53.571439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:53.571496   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:53.606219   72639 cri.go:89] found id: ""
	I1014 15:05:53.606247   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.606254   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:53.606260   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:53.606309   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:53.644906   72639 cri.go:89] found id: ""
	I1014 15:05:53.644929   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.644938   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:53.644945   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:53.645005   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:53.684764   72639 cri.go:89] found id: ""
	I1014 15:05:53.684795   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.684808   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:53.684817   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:53.684872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:53.720559   72639 cri.go:89] found id: ""
	I1014 15:05:53.720587   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.720596   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:53.720605   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:53.720626   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.773759   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:53.773798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:53.787688   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:53.787717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:53.863141   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:53.863163   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:53.863176   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:53.942949   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:53.942989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
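Each listing cycle above issues one crictl query per expected component and logs "0 containers" when nothing is found. The same sweep can be reproduced on the node in a single loop; this is only a compact restatement of the commands already shown in the log:

    # One crictl query per component name, exactly as the cycle above does.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      echo "$name: ${ids:-<none>}"
    done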
	I1014 15:05:56.487207   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:56.500670   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:56.500730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:56.533851   72639 cri.go:89] found id: ""
	I1014 15:05:56.533882   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.533894   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:56.533901   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:56.533964   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:56.573169   72639 cri.go:89] found id: ""
	I1014 15:05:56.573194   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.573201   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:56.573207   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:56.573260   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:56.608110   72639 cri.go:89] found id: ""
	I1014 15:05:56.608138   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.608151   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:56.608158   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:56.608218   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:56.646030   72639 cri.go:89] found id: ""
	I1014 15:05:56.646054   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.646061   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:56.646067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:56.646112   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:56.689427   72639 cri.go:89] found id: ""
	I1014 15:05:56.689455   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.689465   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:56.689473   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:56.689528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:56.723831   72639 cri.go:89] found id: ""
	I1014 15:05:56.723856   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.723865   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:56.723871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:56.723928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:56.756700   72639 cri.go:89] found id: ""
	I1014 15:05:56.756725   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.756734   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:56.756741   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:56.756808   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:56.788201   72639 cri.go:89] found id: ""
	I1014 15:05:56.788228   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.788235   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:56.788242   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:56.788253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:56.847840   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:56.847876   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:56.861984   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:56.862016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:56.933190   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:56.933214   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:56.933226   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:57.015909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:57.015958   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:54.636591   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.638712   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.137008   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:55.566153   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:57.566963   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.067261   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:58.810164   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.811078   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.559421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:59.575593   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:59.575673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:59.611369   72639 cri.go:89] found id: ""
	I1014 15:05:59.611399   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.611409   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:59.611416   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:59.611485   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:59.645786   72639 cri.go:89] found id: ""
	I1014 15:05:59.645817   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.645827   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:59.645834   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:59.645895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:59.681463   72639 cri.go:89] found id: ""
	I1014 15:05:59.681491   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.681499   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:59.681504   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:59.681553   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:59.723738   72639 cri.go:89] found id: ""
	I1014 15:05:59.723767   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.723775   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:59.723782   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:59.723845   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:59.763890   72639 cri.go:89] found id: ""
	I1014 15:05:59.763919   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.763958   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:59.763966   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:59.764027   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:59.802981   72639 cri.go:89] found id: ""
	I1014 15:05:59.803007   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.803015   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:59.803021   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:59.803074   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:59.841887   72639 cri.go:89] found id: ""
	I1014 15:05:59.841916   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.841927   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:59.841934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:59.841989   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:59.877190   72639 cri.go:89] found id: ""
	I1014 15:05:59.877221   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.877231   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:59.877240   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:59.877254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:59.890838   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:59.890864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:59.970122   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:59.970147   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:59.970163   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:00.058994   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:00.059032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:00.103227   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:00.103262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:02.655437   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:02.671240   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:02.671307   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:02.708826   72639 cri.go:89] found id: ""
	I1014 15:06:02.708859   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.708871   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:02.708879   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:02.708943   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:02.744504   72639 cri.go:89] found id: ""
	I1014 15:06:02.744535   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.744546   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:02.744553   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:02.744615   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:02.781144   72639 cri.go:89] found id: ""
	I1014 15:06:02.781180   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.781193   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:02.781201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:02.781281   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:02.819527   72639 cri.go:89] found id: ""
	I1014 15:06:02.819558   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.819567   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:02.819572   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:02.819630   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:02.855653   72639 cri.go:89] found id: ""
	I1014 15:06:02.855683   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.855693   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:02.855700   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:02.855761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:02.900843   72639 cri.go:89] found id: ""
	I1014 15:06:02.900876   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.900888   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:02.900896   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:02.900961   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:02.941812   72639 cri.go:89] found id: ""
	I1014 15:06:02.941840   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.941851   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:02.941857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:02.941919   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:02.980213   72639 cri.go:89] found id: ""
	I1014 15:06:02.980238   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.980246   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:02.980253   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:02.980265   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:00.130683   72173 pod_ready.go:82] duration metric: took 4m0.000550021s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:00.130707   72173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:06:00.130723   72173 pod_ready.go:39] duration metric: took 4m13.708579322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:00.130753   72173 kubeadm.go:597] duration metric: took 4m21.979284634s to restartPrimaryControlPlane
	W1014 15:06:00.130836   72173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:00.130870   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
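At this point the extra wait on the metrics-server pod has used its full 4m0s budget, so the restart path gives up on reusing the existing control plane and falls back to kubeadm reset (the command on the line above). The timeout-bounded wait itself corresponds to a plain kubectl wait; a sketch, again assuming the conventional k8s-app=metrics-server label:

    # Timeout-bounded readiness wait; 4m matches the duration reported in the log.
    kubectl -n kube-system wait --for=condition=Ready pod \
      -l k8s-app=metrics-server --timeout=4m \
      || echo "timed out; the cluster gets reset at this point"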
	I1014 15:06:02.566183   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.066638   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.309953   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.311484   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.034263   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:03.034301   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:03.048574   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:03.048606   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:03.121902   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:03.121925   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:03.121939   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:03.197407   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:03.197445   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:05.737723   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:05.751892   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:05.751959   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:05.789209   72639 cri.go:89] found id: ""
	I1014 15:06:05.789235   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.789242   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:05.789247   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:05.789294   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:05.826189   72639 cri.go:89] found id: ""
	I1014 15:06:05.826220   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.826229   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:05.826236   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:05.826344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:05.864264   72639 cri.go:89] found id: ""
	I1014 15:06:05.864297   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.864308   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:05.864314   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:05.864371   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:05.899697   72639 cri.go:89] found id: ""
	I1014 15:06:05.899724   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.899732   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:05.899737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:05.899784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:05.939552   72639 cri.go:89] found id: ""
	I1014 15:06:05.939583   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.939593   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:05.939601   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:05.939668   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:05.999732   72639 cri.go:89] found id: ""
	I1014 15:06:05.999759   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.999770   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:05.999776   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:05.999834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:06.036228   72639 cri.go:89] found id: ""
	I1014 15:06:06.036259   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.036276   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:06.036284   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:06.036343   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:06.071744   72639 cri.go:89] found id: ""
	I1014 15:06:06.071774   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.071785   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:06.071795   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:06.071808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:06.125737   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:06.125774   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:06.139150   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:06.139177   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:06.206731   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:06.206757   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:06.206773   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:06.287183   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:06.287218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
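The gathering steps in these cycles (kubelet and CRI-O via journalctl, dmesg, container status) can also be driven from the host rather than from inside the log collector; a sketch using minikube ssh, where the profile name is a placeholder because it does not appear in this part of the log:

    # Run the same collection commands from the host; <profile> is hypothetical.
    minikube -p <profile> ssh -- "sudo journalctl -u kubelet -n 400"
    minikube -p <profile> ssh -- "sudo journalctl -u crio -n 400"
    minikube -p <profile> ssh -- "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400"
    minikube -p <profile> ssh -- "sudo crictl ps -a"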
	I1014 15:06:07.565983   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.065897   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:07.809832   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.309290   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:08.827345   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:08.841290   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:08.841384   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:08.877789   72639 cri.go:89] found id: ""
	I1014 15:06:08.877815   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.877824   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:08.877832   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:08.877895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:08.912491   72639 cri.go:89] found id: ""
	I1014 15:06:08.912517   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.912525   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:08.912530   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:08.912586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:08.948727   72639 cri.go:89] found id: ""
	I1014 15:06:08.948755   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.948765   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:08.948773   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:08.948837   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:08.984397   72639 cri.go:89] found id: ""
	I1014 15:06:08.984428   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.984440   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:08.984448   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:08.984498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:09.019222   72639 cri.go:89] found id: ""
	I1014 15:06:09.019250   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.019260   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:09.019268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:09.019329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:09.058309   72639 cri.go:89] found id: ""
	I1014 15:06:09.058335   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.058346   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:09.058353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:09.058415   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:09.096508   72639 cri.go:89] found id: ""
	I1014 15:06:09.096535   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.096544   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:09.096550   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:09.096599   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:09.134564   72639 cri.go:89] found id: ""
	I1014 15:06:09.134611   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.134624   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:09.134635   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:09.134647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:09.188220   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:09.188254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:09.203119   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:09.203149   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:09.279357   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:09.279379   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:09.279390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:09.364219   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:09.364253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:11.910976   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:11.926067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:11.926149   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:11.966238   72639 cri.go:89] found id: ""
	I1014 15:06:11.966271   72639 logs.go:282] 0 containers: []
	W1014 15:06:11.966282   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:11.966289   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:11.966350   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:12.002580   72639 cri.go:89] found id: ""
	I1014 15:06:12.002617   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.002630   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:12.002637   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:12.002698   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:12.037014   72639 cri.go:89] found id: ""
	I1014 15:06:12.037037   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.037046   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:12.037051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:12.037111   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:12.070937   72639 cri.go:89] found id: ""
	I1014 15:06:12.070957   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.070965   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:12.070970   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:12.071019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:12.104920   72639 cri.go:89] found id: ""
	I1014 15:06:12.104949   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.104960   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:12.104967   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:12.105026   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:12.142498   72639 cri.go:89] found id: ""
	I1014 15:06:12.142530   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.142544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:12.142555   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:12.142628   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:12.179590   72639 cri.go:89] found id: ""
	I1014 15:06:12.179613   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.179621   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:12.179627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:12.179675   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:12.213947   72639 cri.go:89] found id: ""
	I1014 15:06:12.213973   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.213981   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:12.213989   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:12.213998   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:12.268214   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:12.268257   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:12.283561   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:12.283594   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:12.382344   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:12.382367   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:12.382377   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:12.469818   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:12.469854   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:12.066154   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.565962   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:12.310167   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.810273   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:15.011529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:15.025355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:15.025423   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:15.060996   72639 cri.go:89] found id: ""
	I1014 15:06:15.061028   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.061040   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:15.061047   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:15.061120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:15.103050   72639 cri.go:89] found id: ""
	I1014 15:06:15.103074   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.103082   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:15.103088   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:15.103140   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:15.140095   72639 cri.go:89] found id: ""
	I1014 15:06:15.140122   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.140132   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:15.140139   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:15.140207   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:15.174612   72639 cri.go:89] found id: ""
	I1014 15:06:15.174642   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.174654   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:15.174669   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:15.174737   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:15.209116   72639 cri.go:89] found id: ""
	I1014 15:06:15.209142   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.209152   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:15.209160   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:15.209221   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:15.242857   72639 cri.go:89] found id: ""
	I1014 15:06:15.242885   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.242896   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:15.242902   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:15.242966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:15.283038   72639 cri.go:89] found id: ""
	I1014 15:06:15.283066   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.283076   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:15.283083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:15.283144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:15.319577   72639 cri.go:89] found id: ""
	I1014 15:06:15.319604   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.319612   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:15.319622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:15.319636   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:15.391485   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:15.391506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:15.391520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:15.470140   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:15.470192   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.513098   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:15.513132   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:15.568275   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:15.568305   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:17.065956   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.566207   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:17.308463   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.309185   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.310841   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:18.085915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:18.113889   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:18.113958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:18.167486   72639 cri.go:89] found id: ""
	I1014 15:06:18.167511   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.167519   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:18.167525   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:18.167568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:18.230244   72639 cri.go:89] found id: ""
	I1014 15:06:18.230273   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.230283   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:18.230291   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:18.230351   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:18.264223   72639 cri.go:89] found id: ""
	I1014 15:06:18.264252   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.264261   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:18.264268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:18.264332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:18.298719   72639 cri.go:89] found id: ""
	I1014 15:06:18.298750   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.298762   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:18.298770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:18.298843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:18.335113   72639 cri.go:89] found id: ""
	I1014 15:06:18.335140   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.335147   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:18.335153   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:18.335212   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:18.373690   72639 cri.go:89] found id: ""
	I1014 15:06:18.373721   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.373736   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:18.373743   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:18.373792   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:18.411138   72639 cri.go:89] found id: ""
	I1014 15:06:18.411171   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.411182   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:18.411190   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:18.411250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:18.451281   72639 cri.go:89] found id: ""
	I1014 15:06:18.451306   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.451314   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:18.451323   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:18.451334   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:18.502141   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:18.502178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.517449   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:18.517476   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:18.586737   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:18.586760   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:18.586776   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:18.670234   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:18.670270   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.210200   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:21.222998   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.223053   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.257132   72639 cri.go:89] found id: ""
	I1014 15:06:21.257160   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.257167   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:21.257174   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.257237   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.290905   72639 cri.go:89] found id: ""
	I1014 15:06:21.290933   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.290945   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:21.290952   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.291007   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.331067   72639 cri.go:89] found id: ""
	I1014 15:06:21.331098   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.331108   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:21.331128   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.331178   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.370042   72639 cri.go:89] found id: ""
	I1014 15:06:21.370069   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.370077   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:21.370083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.370141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:21.414900   72639 cri.go:89] found id: ""
	I1014 15:06:21.414920   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.414932   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:21.414938   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:21.414985   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:21.452914   72639 cri.go:89] found id: ""
	I1014 15:06:21.452941   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.452952   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:21.452960   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:21.453022   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:21.486725   72639 cri.go:89] found id: ""
	I1014 15:06:21.486752   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.486763   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:21.486770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:21.486831   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:21.524012   72639 cri.go:89] found id: ""
	I1014 15:06:21.524034   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.524042   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:21.524049   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:21.524059   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:21.603238   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:21.603279   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.645655   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:21.645689   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:21.701053   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:21.701092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:21.715515   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:21.715542   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:21.781831   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:22.067051   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:24.567173   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.810342   72390 pod_ready.go:82] duration metric: took 4m0.007657098s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:21.810365   72390 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 15:06:21.810382   72390 pod_ready.go:39] duration metric: took 4m7.92113061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:21.810401   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:21.810433   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.810488   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.856565   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:21.856587   72390 cri.go:89] found id: ""
	I1014 15:06:21.856594   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:21.856654   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.861036   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.861091   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.898486   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:21.898517   72390 cri.go:89] found id: ""
	I1014 15:06:21.898528   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:21.898587   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.903145   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.903245   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.941127   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:21.941164   72390 cri.go:89] found id: ""
	I1014 15:06:21.941173   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:21.941232   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.945584   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.945658   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.994370   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:21.994398   72390 cri.go:89] found id: ""
	I1014 15:06:21.994407   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:21.994454   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.998498   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.998547   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:22.037415   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.037443   72390 cri.go:89] found id: ""
	I1014 15:06:22.037453   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:22.037507   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.041882   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:22.041947   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:22.079219   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.079243   72390 cri.go:89] found id: ""
	I1014 15:06:22.079252   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:22.079319   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.083373   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:22.083432   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:22.120795   72390 cri.go:89] found id: ""
	I1014 15:06:22.120818   72390 logs.go:282] 0 containers: []
	W1014 15:06:22.120825   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:22.120832   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:22.120889   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:22.158545   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.158571   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.158577   72390 cri.go:89] found id: ""
	I1014 15:06:22.158586   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:22.158662   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.162500   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.166734   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:22.166759   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.202711   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:22.202736   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:22.279594   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:22.279635   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:22.293836   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:22.293863   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:22.335451   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:22.335478   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:22.374244   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:22.374274   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.422538   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:22.422567   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.486973   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:22.487009   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.528871   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:22.528899   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:22.575947   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:22.575982   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:22.713356   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:22.713387   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:22.760315   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:22.760348   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:22.811144   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:22.811169   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:25.780847   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:25.800698   72390 api_server.go:72] duration metric: took 4m18.640749756s to wait for apiserver process to appear ...
	I1014 15:06:25.800733   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:25.800779   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:25.800845   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:25.841159   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:25.841193   72390 cri.go:89] found id: ""
	I1014 15:06:25.841203   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:25.841259   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.845503   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:25.845560   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:25.884122   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:25.884151   72390 cri.go:89] found id: ""
	I1014 15:06:25.884161   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:25.884223   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.889638   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:25.889700   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:25.931199   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:25.931220   72390 cri.go:89] found id: ""
	I1014 15:06:25.931230   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:25.931285   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.936063   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:25.936127   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:25.979162   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:25.979188   72390 cri.go:89] found id: ""
	I1014 15:06:25.979197   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:25.979254   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.983550   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:25.983611   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:26.021835   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:26.021854   72390 cri.go:89] found id: ""
	I1014 15:06:26.021862   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:26.021911   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.026005   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:26.026073   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:26.067719   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:26.067740   72390 cri.go:89] found id: ""
	I1014 15:06:26.067749   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:26.067803   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.073387   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:26.073453   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:26.116305   72390 cri.go:89] found id: ""
	I1014 15:06:26.116336   72390 logs.go:282] 0 containers: []
	W1014 15:06:26.116349   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:26.116358   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:26.116427   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:26.156959   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.156985   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.156991   72390 cri.go:89] found id: ""
	I1014 15:06:26.156999   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:26.157051   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.161437   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.165696   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:26.165718   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:26.282026   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:26.282056   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:26.333504   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:26.333543   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:26.376435   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:26.376469   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.416633   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:26.416660   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.388546   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.257645941s)
	I1014 15:06:26.388631   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:26.407118   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:26.417718   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:26.428364   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:26.428391   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:26.428451   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:26.437953   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:26.438026   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:26.448356   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:26.458476   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:26.458541   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:26.469941   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.482934   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:26.483016   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.495682   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:26.506113   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:26.506176   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:26.517784   72173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:26.568927   72173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:06:26.568978   72173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:26.685727   72173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:26.685855   72173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:26.685963   72173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:26.693948   72173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:26.696177   72173 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:26.696269   72173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:26.696318   72173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:26.696388   72173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:26.696438   72173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:26.696495   72173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:26.696536   72173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:26.696588   72173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:26.696639   72173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:26.696696   72173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:26.696760   72173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:26.700275   72173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:26.700406   72173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:26.831734   72173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:27.336318   72173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:06:27.574604   72173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:27.681370   72173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:27.788769   72173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:27.789324   72173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:27.791842   72173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:24.282018   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:24.295177   72639 kubeadm.go:597] duration metric: took 4m4.450514459s to restartPrimaryControlPlane
	W1014 15:06:24.295255   72639 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:24.295283   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:27.793786   72173 out.go:235]   - Booting up control plane ...
	I1014 15:06:27.793891   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:27.793980   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:27.794089   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:27.815223   72173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:27.821764   72173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:27.821817   72173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:27.965327   72173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:06:27.965707   72173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:06:28.967332   72173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001260991s
	I1014 15:06:28.967473   72173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:06:29.238014   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.942706631s)
	I1014 15:06:29.238096   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:29.258804   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:29.269440   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:29.279613   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:29.279633   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:29.279672   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:29.292840   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:29.292912   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:29.306987   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:29.319896   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:29.319970   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:29.333974   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.343993   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:29.344051   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.354691   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:29.364354   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:29.364422   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:29.374674   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:29.452845   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:06:29.452961   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:29.618263   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:29.618446   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:29.618582   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:06:29.813387   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:29.815501   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:29.815610   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:29.815697   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:29.815799   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:29.815879   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:29.815971   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:29.816039   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:29.816125   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:29.816206   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:29.816307   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:29.816404   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:29.816454   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:29.816531   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:29.944505   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:30.106467   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:30.226356   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:30.322169   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:30.342382   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:30.343666   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:30.343736   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:30.507000   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:27.066923   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:29.068434   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:26.453659   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:26.453693   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:26.900485   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:26.900518   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:26.925431   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:26.925461   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:26.986104   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:26.986140   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:27.037557   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:27.037600   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:27.084362   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:27.084397   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:27.138680   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:27.138713   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:27.191283   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:27.191314   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:29.761781   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:06:29.769020   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:06:29.770210   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:29.770232   72390 api_server.go:131] duration metric: took 3.969490314s to wait for apiserver health ...
	I1014 15:06:29.770242   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:29.770268   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:29.770328   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:29.827908   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:29.827930   72390 cri.go:89] found id: ""
	I1014 15:06:29.827939   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:29.827994   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.837786   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:29.837864   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:29.877625   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:29.877661   72390 cri.go:89] found id: ""
	I1014 15:06:29.877672   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:29.877738   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.882502   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:29.882578   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:29.923002   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:29.923027   72390 cri.go:89] found id: ""
	I1014 15:06:29.923037   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:29.923094   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.927559   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:29.927621   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:29.966098   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:29.966124   72390 cri.go:89] found id: ""
	I1014 15:06:29.966133   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:29.966189   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.972287   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:29.972371   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:30.024389   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.024414   72390 cri.go:89] found id: ""
	I1014 15:06:30.024423   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:30.024481   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.029914   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:30.029976   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:30.085703   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.085727   72390 cri.go:89] found id: ""
	I1014 15:06:30.085737   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:30.085806   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.097004   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:30.097098   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:30.147464   72390 cri.go:89] found id: ""
	I1014 15:06:30.147494   72390 logs.go:282] 0 containers: []
	W1014 15:06:30.147505   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:30.147512   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:30.147573   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:30.195003   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.195030   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:30.195036   72390 cri.go:89] found id: ""
	I1014 15:06:30.195045   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:30.195099   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.199436   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.204079   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:30.204105   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:30.221021   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:30.221049   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:30.280979   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:30.281013   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:30.339261   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:30.339291   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.390034   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:30.390081   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.461221   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:30.461262   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.504100   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:30.504134   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:30.870561   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:30.870629   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:30.942952   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:30.942998   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:30.995435   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:30.995484   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:31.038804   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:31.038839   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:31.080187   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:31.080218   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:31.122248   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:31.122295   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:30.509157   72639 out.go:235]   - Booting up control plane ...
	I1014 15:06:30.509293   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:30.518440   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:30.520572   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:30.522337   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:30.524996   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:06:33.742510   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:06:33.742539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.742546   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.742552   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.742557   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.742562   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.742566   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.742576   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.742582   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.742615   72390 system_pods.go:74] duration metric: took 3.972347536s to wait for pod list to return data ...
	I1014 15:06:33.742628   72390 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:33.744532   72390 default_sa.go:45] found service account: "default"
	I1014 15:06:33.744551   72390 default_sa.go:55] duration metric: took 1.918153ms for default service account to be created ...
	I1014 15:06:33.744558   72390 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:33.750292   72390 system_pods.go:86] 8 kube-system pods found
	I1014 15:06:33.750315   72390 system_pods.go:89] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.750320   72390 system_pods.go:89] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.750324   72390 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.750329   72390 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.750332   72390 system_pods.go:89] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.750335   72390 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.750341   72390 system_pods.go:89] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.750346   72390 system_pods.go:89] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.750352   72390 system_pods.go:126] duration metric: took 5.790549ms to wait for k8s-apps to be running ...
	I1014 15:06:33.750358   72390 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:33.750398   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:33.770342   72390 system_svc.go:56] duration metric: took 19.978034ms WaitForService to wait for kubelet
	I1014 15:06:33.770370   72390 kubeadm.go:582] duration metric: took 4m26.610427104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:33.770392   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:33.774149   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:33.774176   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:33.774190   72390 node_conditions.go:105] duration metric: took 3.792746ms to run NodePressure ...
	I1014 15:06:33.774203   72390 start.go:241] waiting for startup goroutines ...
	I1014 15:06:33.774217   72390 start.go:246] waiting for cluster config update ...
	I1014 15:06:33.774232   72390 start.go:255] writing updated cluster config ...
	I1014 15:06:33.774560   72390 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:33.823879   72390 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:33.825962   72390 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-201291" cluster and "default" namespace by default
	I1014 15:06:33.976430   72173 kubeadm.go:310] [api-check] The API server is healthy after 5.00773575s
	I1014 15:06:33.990496   72173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:06:34.010821   72173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:06:34.051244   72173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:06:34.051513   72173 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-989166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:06:34.066447   72173 kubeadm.go:310] [bootstrap-token] Using token: 46olqw.t0lfd7bmyz0olhbh
	I1014 15:06:34.067925   72173 out.go:235]   - Configuring RBAC rules ...
	I1014 15:06:34.068073   72173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:06:34.077775   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:06:34.097676   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:06:34.103212   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:06:34.112640   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:06:34.119886   72173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:06:34.382372   72173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:06:34.825514   72173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:06:35.383856   72173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:06:35.383877   72173 kubeadm.go:310] 
	I1014 15:06:35.383939   72173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:06:35.383976   72173 kubeadm.go:310] 
	I1014 15:06:35.384094   72173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:06:35.384103   72173 kubeadm.go:310] 
	I1014 15:06:35.384136   72173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:06:35.384223   72173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:06:35.384286   72173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:06:35.384311   72173 kubeadm.go:310] 
	I1014 15:06:35.384414   72173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:06:35.384430   72173 kubeadm.go:310] 
	I1014 15:06:35.384499   72173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:06:35.384512   72173 kubeadm.go:310] 
	I1014 15:06:35.384597   72173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:06:35.384685   72173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:06:35.384744   72173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:06:35.384750   72173 kubeadm.go:310] 
	I1014 15:06:35.384821   72173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:06:35.384928   72173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:06:35.384940   72173 kubeadm.go:310] 
	I1014 15:06:35.385047   72173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385192   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:06:35.385224   72173 kubeadm.go:310] 	--control-plane 
	I1014 15:06:35.385231   72173 kubeadm.go:310] 
	I1014 15:06:35.385322   72173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:06:35.385334   72173 kubeadm.go:310] 
	I1014 15:06:35.385449   72173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385588   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:06:35.386604   72173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:06:35.386674   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:06:35.386689   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:06:35.388617   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:06:31.069009   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:33.565864   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:35.390017   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:06:35.402242   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:06:35.428958   72173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:06:35.429016   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:35.429080   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-989166 minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=embed-certs-989166 minikube.k8s.io/primary=true
	I1014 15:06:35.475775   72173 ops.go:34] apiserver oom_adj: -16
	I1014 15:06:35.645234   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.145613   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.646197   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.145401   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.645956   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.145978   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.645292   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.145444   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.646019   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.869659   72173 kubeadm.go:1113] duration metric: took 4.440701402s to wait for elevateKubeSystemPrivileges
	I1014 15:06:39.869695   72173 kubeadm.go:394] duration metric: took 5m1.76989803s to StartCluster
	I1014 15:06:39.869713   72173 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.869797   72173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:06:39.872564   72173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.872947   72173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:06:39.873165   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:06:39.873085   72173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:06:39.873246   72173 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-989166"
	I1014 15:06:39.873256   72173 addons.go:69] Setting metrics-server=true in profile "embed-certs-989166"
	I1014 15:06:39.873273   72173 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-989166"
	I1014 15:06:39.873272   72173 addons.go:69] Setting default-storageclass=true in profile "embed-certs-989166"
	I1014 15:06:39.873319   72173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-989166"
	W1014 15:06:39.873282   72173 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:06:39.873417   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873282   72173 addons.go:234] Setting addon metrics-server=true in "embed-certs-989166"
	W1014 15:06:39.873476   72173 addons.go:243] addon metrics-server should already be in state true
	I1014 15:06:39.873504   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873875   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873888   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873920   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873947   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873986   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.874050   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.874921   72173 out.go:177] * Verifying Kubernetes components...
	I1014 15:06:39.876972   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1014 15:06:39.893367   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I1014 15:06:39.893905   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.893915   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894023   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894471   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894493   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894651   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894677   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894713   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894731   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894942   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895073   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895563   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.895593   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.895778   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895970   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.896249   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.896293   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.899661   72173 addons.go:234] Setting addon default-storageclass=true in "embed-certs-989166"
	W1014 15:06:39.899685   72173 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:06:39.899714   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.900088   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.900131   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.912591   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1014 15:06:39.913089   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.913630   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.913652   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.914099   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.914287   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.914839   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1014 15:06:39.915288   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.915783   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.915802   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.916147   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.916171   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.916382   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.917766   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.917796   72173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:06:39.919192   72173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:06:35.567508   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:38.065792   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:40.066618   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:39.919297   72173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:39.919320   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:06:39.919339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.920468   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:06:39.920489   72173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:06:39.920507   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.921603   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1014 15:06:39.921970   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.922502   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.922525   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.922994   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.923333   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923585   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.923627   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.923826   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.923846   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923876   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924028   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.924270   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.924291   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.924310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924397   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.924674   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924840   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.925027   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.925201   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.945435   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1014 15:06:39.945958   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.946468   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.946497   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.946855   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.947023   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.948734   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.948924   72173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:39.948942   72173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:06:39.948966   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.951019   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.951437   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951570   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.951742   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.951918   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.952058   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:40.129893   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:06:40.215427   72173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224710   72173 node_ready.go:49] node "embed-certs-989166" has status "Ready":"True"
	I1014 15:06:40.224731   72173 node_ready.go:38] duration metric: took 9.266994ms for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224742   72173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:40.230651   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:40.394829   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:40.422573   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:40.430300   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:06:40.430319   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:06:40.503826   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:06:40.503857   72173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:06:40.586087   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.586116   72173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:06:40.726605   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.887453   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887475   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.887809   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.887857   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.887869   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.887886   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887898   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.888127   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.888150   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.888160   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.901694   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.901717   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.902091   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.902103   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.902111   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.352636   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.352670   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.352963   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:41.353017   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353029   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.353036   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.353043   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.353274   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353302   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578200   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578219   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578484   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578529   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578554   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578588   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578827   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578844   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578854   72173 addons.go:475] Verifying addon metrics-server=true in "embed-certs-989166"
	I1014 15:06:41.581312   72173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:06:41.582506   72173 addons.go:510] duration metric: took 1.709432803s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:06:42.237265   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.240605   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:42.067701   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.566134   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:46.738094   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:48.739238   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.238145   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.238167   72173 pod_ready.go:82] duration metric: took 9.007493385s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.238176   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243268   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.243299   72173 pod_ready.go:82] duration metric: took 5.116183ms for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243311   72173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.247979   72173 pod_ready.go:93] pod "etcd-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.248001   72173 pod_ready.go:82] duration metric: took 4.682826ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.248009   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252590   72173 pod_ready.go:93] pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.252615   72173 pod_ready.go:82] duration metric: took 4.599399ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252624   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257541   72173 pod_ready.go:93] pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.257566   72173 pod_ready.go:82] duration metric: took 4.935116ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257575   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:47.064934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.066284   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.635873   72173 pod_ready.go:93] pod "kube-proxy-g572s" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.635895   72173 pod_ready.go:82] duration metric: took 378.313947ms for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.635904   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035141   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:50.035169   72173 pod_ready.go:82] duration metric: took 399.257073ms for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035179   72173 pod_ready.go:39] duration metric: took 9.810424567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:50.035195   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:50.035258   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:50.054964   72173 api_server.go:72] duration metric: took 10.181978114s to wait for apiserver process to appear ...
	I1014 15:06:50.054996   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:50.055020   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:06:50.061606   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:06:50.063380   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:50.063411   72173 api_server.go:131] duration metric: took 8.40661ms to wait for apiserver health ...
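The healthz wait above is an HTTPS GET against the apiserver endpoint until it answers 200/ok. A standalone sketch of that probe; skipping TLS verification is an assumption made purely for brevity (minikube itself uses the cluster's CA and client certificates), and the address is the one reported in the log:

    // Sketch only: probe the apiserver /healthz endpoint checked in the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption for brevity; real checks should verify the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.39.41:8443/healthz") // address taken from the log
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy control plane returns "200: ok"
    }
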
	I1014 15:06:50.063421   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:50.239258   72173 system_pods.go:59] 9 kube-system pods found
	I1014 15:06:50.239286   72173 system_pods.go:61] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.239292   72173 system_pods.go:61] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.239295   72173 system_pods.go:61] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.239299   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.239303   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.239305   72173 system_pods.go:61] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.239308   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.239314   72173 system_pods.go:61] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.239317   72173 system_pods.go:61] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.239325   72173 system_pods.go:74] duration metric: took 175.89649ms to wait for pod list to return data ...
	I1014 15:06:50.239334   72173 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:50.435980   72173 default_sa.go:45] found service account: "default"
	I1014 15:06:50.436007   72173 default_sa.go:55] duration metric: took 196.667838ms for default service account to be created ...
	I1014 15:06:50.436017   72173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:50.639185   72173 system_pods.go:86] 9 kube-system pods found
	I1014 15:06:50.639224   72173 system_pods.go:89] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.639234   72173 system_pods.go:89] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.639241   72173 system_pods.go:89] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.639248   72173 system_pods.go:89] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.639254   72173 system_pods.go:89] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.639262   72173 system_pods.go:89] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.639269   72173 system_pods.go:89] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.639283   72173 system_pods.go:89] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.639295   72173 system_pods.go:89] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.639309   72173 system_pods.go:126] duration metric: took 203.286322ms to wait for k8s-apps to be running ...
	I1014 15:06:50.639327   72173 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:50.639388   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:50.655377   72173 system_svc.go:56] duration metric: took 16.0447ms WaitForService to wait for kubelet
	I1014 15:06:50.655402   72173 kubeadm.go:582] duration metric: took 10.782421893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:50.655425   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:50.835507   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:50.835543   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:50.835556   72173 node_conditions.go:105] duration metric: took 180.126755ms to run NodePressure ...
	I1014 15:06:50.835570   72173 start.go:241] waiting for startup goroutines ...
	I1014 15:06:50.835580   72173 start.go:246] waiting for cluster config update ...
	I1014 15:06:50.835594   72173 start.go:255] writing updated cluster config ...
	I1014 15:06:50.835924   72173 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:50.883737   72173 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:50.886200   72173 out.go:177] * Done! kubectl is now configured to use "embed-certs-989166" cluster and "default" namespace by default
	I1014 15:06:51.066344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:53.566466   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:56.066734   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:58.567007   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:01.066112   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:03.068758   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:05.566174   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:07.566274   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:09.566829   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:10.525694   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:07:10.526665   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:10.526908   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:12.066402   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:13.560638   71679 pod_ready.go:82] duration metric: took 4m0.000980901s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	E1014 15:07:13.560669   71679 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:07:13.560693   71679 pod_ready.go:39] duration metric: took 4m13.04495779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:13.560725   71679 kubeadm.go:597] duration metric: took 4m21.006404411s to restartPrimaryControlPlane
	W1014 15:07:13.560791   71679 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:07:13.560823   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:07:15.527128   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:15.527376   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:25.527779   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:25.528060   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:39.775370   71679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.214519412s)
	I1014 15:07:39.775448   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:07:39.790736   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:07:39.800575   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:07:39.810380   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:07:39.810402   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:07:39.810462   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:07:39.819880   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:07:39.819938   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:07:39.830542   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:07:39.840268   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:07:39.840318   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:07:39.849727   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.858513   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:07:39.858651   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.869154   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:07:39.878724   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:07:39.878798   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
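The grep/rm sequence above is minikube's stale-kubeconfig check: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is searched for the control-plane endpoint and removed when the endpoint is absent or the file is missing, so the following `kubeadm init` regenerates them. A rough equivalent of that check, with paths and the endpoint string taken from the log and error handling simplified; this is a sketch, not minikube's actual implementation:

    // Sketch only: drop /etc/kubernetes kubeconfig files that do not reference
    // the expected control-plane endpoint, mirroring the grep/rm calls above.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	files := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			// Missing file or stale endpoint: remove it so kubeadm recreates it.
    			fmt.Printf("removing stale config %s\n", f)
    			os.Remove(f)
    		}
    	}
    }
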
	I1014 15:07:39.888123   71679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:07:39.942676   71679 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:07:39.942771   71679 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:07:40.060558   71679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:07:40.060698   71679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:07:40.060861   71679 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:07:40.076085   71679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:07:40.078200   71679 out.go:235]   - Generating certificates and keys ...
	I1014 15:07:40.078301   71679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:07:40.078381   71679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:07:40.078505   71679 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:07:40.078620   71679 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:07:40.078717   71679 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:07:40.078794   71679 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:07:40.078887   71679 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:07:40.078973   71679 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:07:40.079069   71679 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:07:40.079161   71679 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:07:40.079234   71679 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:07:40.079315   71679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:07:40.177082   71679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:07:40.264965   71679 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:07:40.415660   71679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:07:40.556759   71679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:07:40.727152   71679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:07:40.727573   71679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:07:40.730409   71679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:07:40.732204   71679 out.go:235]   - Booting up control plane ...
	I1014 15:07:40.732328   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:07:40.732440   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:07:40.732529   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:07:40.751839   71679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:07:40.758034   71679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:07:40.758095   71679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:07:40.895135   71679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:07:40.895254   71679 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:07:41.397066   71679 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.194797ms
	I1014 15:07:41.397209   71679 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:07:46.401247   71679 kubeadm.go:310] [api-check] The API server is healthy after 5.002197966s
	I1014 15:07:46.419134   71679 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:07:46.433128   71679 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:07:46.477079   71679 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:07:46.477289   71679 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:07:46.492703   71679 kubeadm.go:310] [bootstrap-token] Using token: 1vsv04.mf3pqj2ow157sq8h
	I1014 15:07:46.494314   71679 out.go:235]   - Configuring RBAC rules ...
	I1014 15:07:46.494467   71679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:07:46.501090   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:07:46.515987   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:07:46.522417   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:07:46.528612   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:07:46.536975   71679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:07:46.810642   71679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:07:47.240531   71679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:07:47.810279   71679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:07:47.811169   71679 kubeadm.go:310] 
	I1014 15:07:47.811230   71679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:07:47.811238   71679 kubeadm.go:310] 
	I1014 15:07:47.811307   71679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:07:47.811312   71679 kubeadm.go:310] 
	I1014 15:07:47.811335   71679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:07:47.811388   71679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:07:47.811440   71679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:07:47.811447   71679 kubeadm.go:310] 
	I1014 15:07:47.811501   71679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:07:47.811507   71679 kubeadm.go:310] 
	I1014 15:07:47.811546   71679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:07:47.811553   71679 kubeadm.go:310] 
	I1014 15:07:47.811600   71679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:07:47.811667   71679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:07:47.811755   71679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:07:47.811771   71679 kubeadm.go:310] 
	I1014 15:07:47.811844   71679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:07:47.811912   71679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:07:47.811921   71679 kubeadm.go:310] 
	I1014 15:07:47.811999   71679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812091   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:07:47.812139   71679 kubeadm.go:310] 	--control-plane 
	I1014 15:07:47.812153   71679 kubeadm.go:310] 
	I1014 15:07:47.812231   71679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:07:47.812238   71679 kubeadm.go:310] 
	I1014 15:07:47.812306   71679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812393   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:07:47.814071   71679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:07:47.814103   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:07:47.814113   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:07:47.816033   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:07:45.528527   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:45.528768   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:47.817325   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:07:47.829639   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:07:47.847797   71679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:07:47.847857   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:47.847929   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-813300 minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=no-preload-813300 minikube.k8s.io/primary=true
	I1014 15:07:48.039959   71679 ops.go:34] apiserver oom_adj: -16
	I1014 15:07:48.040095   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:48.540295   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.040911   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.540233   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.040146   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.540494   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.041033   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.540516   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.040935   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.146854   71679 kubeadm.go:1113] duration metric: took 4.299055033s to wait for elevateKubeSystemPrivileges
	I1014 15:07:52.146890   71679 kubeadm.go:394] duration metric: took 4m59.642546726s to StartCluster
	I1014 15:07:52.146906   71679 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.146987   71679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:07:52.148782   71679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.149067   71679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:07:52.149168   71679 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:07:52.149303   71679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-813300"
	I1014 15:07:52.149333   71679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-813300"
	I1014 15:07:52.149342   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1014 15:07:52.149355   71679 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:07:52.149378   71679 addons.go:69] Setting default-storageclass=true in profile "no-preload-813300"
	I1014 15:07:52.149390   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149412   71679 addons.go:69] Setting metrics-server=true in profile "no-preload-813300"
	I1014 15:07:52.149447   71679 addons.go:234] Setting addon metrics-server=true in "no-preload-813300"
	W1014 15:07:52.149461   71679 addons.go:243] addon metrics-server should already be in state true
	I1014 15:07:52.149494   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149421   71679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-813300"
	I1014 15:07:52.149748   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149789   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149861   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149890   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149905   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149928   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.150482   71679 out.go:177] * Verifying Kubernetes components...
	I1014 15:07:52.152252   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:07:52.167205   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1014 15:07:52.170723   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I1014 15:07:52.170742   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.170728   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1014 15:07:52.171111   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171321   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171386   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171678   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171702   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171717   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.171900   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171916   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.172164   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172243   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172279   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172325   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.172386   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.172868   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172916   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.175482   71679 addons.go:234] Setting addon default-storageclass=true in "no-preload-813300"
	W1014 15:07:52.175502   71679 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:07:52.175529   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.175763   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.175792   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.190835   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1014 15:07:52.191422   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.191767   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I1014 15:07:52.191901   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1014 15:07:52.192010   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.192027   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192317   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192436   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.192481   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192988   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193010   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192992   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193060   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.193474   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193524   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.193530   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193563   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.193729   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.193770   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.195702   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.195770   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.197642   71679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:07:52.197652   71679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:07:52.198957   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:07:52.198978   71679 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:07:52.198998   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.199075   71679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.199096   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:07:52.199111   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.202637   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203064   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203088   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203245   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.203515   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.203519   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203663   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.203812   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.203878   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.204187   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.204377   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.204535   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.204683   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.231332   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I1014 15:07:52.231813   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.232320   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.232344   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.232645   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.232836   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.234309   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.234570   71679 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.234585   71679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:07:52.234622   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.237749   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238364   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.238393   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238562   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.238744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.238903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.239031   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.375830   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:07:52.401606   71679 node_ready.go:35] waiting up to 6m0s for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431363   71679 node_ready.go:49] node "no-preload-813300" has status "Ready":"True"
	I1014 15:07:52.431393   71679 node_ready.go:38] duration metric: took 29.758277ms for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431405   71679 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:52.446747   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:52.501642   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:07:52.501664   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:07:52.509733   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.515833   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.536485   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:07:52.536508   71679 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:07:52.622269   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.622299   71679 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:07:52.702873   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.909827   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.909865   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910194   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910209   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.910235   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.910249   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910510   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910525   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918161   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.918182   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.918473   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.918493   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918480   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:53.707659   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.191781585s)
	I1014 15:07:53.707706   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.707719   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708011   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708035   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:53.708052   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.708062   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708330   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708346   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.060665   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.357747934s)
	I1014 15:07:54.060752   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.060770   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.061069   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.061153   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.061164   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.061173   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.061184   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.062712   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.062787   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.062797   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.062811   71679 addons.go:475] Verifying addon metrics-server=true in "no-preload-813300"
	I1014 15:07:54.064762   71679 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:07:54.066623   71679 addons.go:510] duration metric: took 1.917465271s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:07:54.454216   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:56.455649   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:56.455674   71679 pod_ready.go:82] duration metric: took 4.00889709s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:56.455689   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:58.461687   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:59.962360   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.962382   71679 pod_ready.go:82] duration metric: took 3.506686516s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.962391   71679 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969241   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.969261   71679 pod_ready.go:82] duration metric: took 6.864356ms for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969270   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974810   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.974828   71679 pod_ready.go:82] duration metric: took 5.552122ms for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974837   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979555   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.979580   71679 pod_ready.go:82] duration metric: took 4.735265ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979592   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985111   71679 pod_ready.go:93] pod "kube-proxy-54rrd" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.985138   71679 pod_ready.go:82] duration metric: took 5.538126ms for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985150   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359524   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:08:00.359548   71679 pod_ready.go:82] duration metric: took 374.389838ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359558   71679 pod_ready.go:39] duration metric: took 7.928141116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:08:00.359575   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:08:00.359626   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:08:00.376115   71679 api_server.go:72] duration metric: took 8.22700683s to wait for apiserver process to appear ...
	I1014 15:08:00.376144   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:08:00.376169   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:08:00.381225   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:08:00.382348   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:08:00.382377   71679 api_server.go:131] duration metric: took 6.225832ms to wait for apiserver health ...
	I1014 15:08:00.382386   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:08:00.563350   71679 system_pods.go:59] 9 kube-system pods found
	I1014 15:08:00.563382   71679 system_pods.go:61] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.563386   71679 system_pods.go:61] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.563390   71679 system_pods.go:61] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.563394   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.563399   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.563402   71679 system_pods.go:61] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.563405   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.563412   71679 system_pods.go:61] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.563416   71679 system_pods.go:61] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.563424   71679 system_pods.go:74] duration metric: took 181.032852ms to wait for pod list to return data ...
	I1014 15:08:00.563436   71679 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:08:00.760054   71679 default_sa.go:45] found service account: "default"
	I1014 15:08:00.760084   71679 default_sa.go:55] duration metric: took 196.637678ms for default service account to be created ...
	I1014 15:08:00.760095   71679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:08:00.962545   71679 system_pods.go:86] 9 kube-system pods found
	I1014 15:08:00.962577   71679 system_pods.go:89] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.962583   71679 system_pods.go:89] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.962587   71679 system_pods.go:89] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.962591   71679 system_pods.go:89] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.962605   71679 system_pods.go:89] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.962609   71679 system_pods.go:89] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.962613   71679 system_pods.go:89] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.962619   71679 system_pods.go:89] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.962623   71679 system_pods.go:89] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.962633   71679 system_pods.go:126] duration metric: took 202.532202ms to wait for k8s-apps to be running ...
	I1014 15:08:00.962640   71679 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:08:00.962682   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:00.980272   71679 system_svc.go:56] duration metric: took 17.624381ms WaitForService to wait for kubelet
	I1014 15:08:00.980310   71679 kubeadm.go:582] duration metric: took 8.831207019s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:08:00.980333   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:08:01.160914   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:08:01.160947   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:08:01.160961   71679 node_conditions.go:105] duration metric: took 180.622279ms to run NodePressure ...
	I1014 15:08:01.160976   71679 start.go:241] waiting for startup goroutines ...
	I1014 15:08:01.160985   71679 start.go:246] waiting for cluster config update ...
	I1014 15:08:01.161000   71679 start.go:255] writing updated cluster config ...
	I1014 15:08:01.161357   71679 ssh_runner.go:195] Run: rm -f paused
	I1014 15:08:01.212486   71679 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:08:01.215083   71679 out.go:177] * Done! kubectl is now configured to use "no-preload-813300" cluster and "default" namespace by default
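	(Editorial note: the log below switches to a second profile whose kubeadm v1.20.0 init fails. Before that, the state reached here for "no-preload-813300" can be spot-checked by hand with commands like the following; this is an illustrative sketch only, assuming the kubeconfig context name matches the profile and that the metrics-server pods carry the upstream "k8s-app=metrics-server" label.)

		minikube -p no-preload-813300 addons list
		kubectl --context no-preload-813300 -n kube-system get pods -o wide
		kubectl --context no-preload-813300 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=5m
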
	I1014 15:08:25.530669   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:08:25.530970   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530998   72639 kubeadm.go:310] 
	I1014 15:08:25.531059   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:08:25.531114   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:08:25.531125   72639 kubeadm.go:310] 
	I1014 15:08:25.531177   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:08:25.531238   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:08:25.531381   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:08:25.531392   72639 kubeadm.go:310] 
	I1014 15:08:25.531527   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:08:25.531587   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:08:25.531633   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:08:25.531643   72639 kubeadm.go:310] 
	I1014 15:08:25.531766   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:08:25.531872   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:08:25.531891   72639 kubeadm.go:310] 
	I1014 15:08:25.532038   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:08:25.532174   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:08:25.532281   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:08:25.532377   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:08:25.532418   72639 kubeadm.go:310] 
	I1014 15:08:25.532543   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:08:25.532640   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:08:25.532742   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 15:08:25.532833   72639 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
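	(Editorial note: for reference, the diagnostics kubeadm suggests above, collected into one manual pass on the node; CONTAINERID is the placeholder from the log output and would be replaced with an ID taken from the ps listing.)

		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# per the [WARNING Service-Kubelet] above:
		systemctl enable kubelet.service
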
	
	I1014 15:08:25.532870   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:08:31.003635   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.470741012s)
	I1014 15:08:31.003724   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:31.018666   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:08:31.029707   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:08:31.029729   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:08:31.029776   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:08:31.039554   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:08:31.039625   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:08:31.049748   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:08:31.059618   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:08:31.059682   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:08:31.069369   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.078321   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:08:31.078385   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.088006   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:08:31.096681   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:08:31.096742   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
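	(Editorial note: the four grep-and-remove steps above all follow the same pattern; a condensed, illustrative shell equivalent is sketched below. This is not the minikube implementation itself, just the same check applied in a loop.)

		for f in admin kubelet controller-manager scheduler; do
		  # keep the file only if it already points at the expected control-plane endpoint
		  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f.conf \
		    || sudo rm -f /etc/kubernetes/$f.conf
		done
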
	I1014 15:08:31.106269   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:08:31.182768   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:08:31.182833   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:08:31.341660   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:08:31.341833   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:08:31.342008   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:08:31.538731   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:08:31.540933   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:08:31.541037   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:08:31.541124   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:08:31.541270   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:08:31.541386   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:08:31.541486   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:08:31.541559   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:08:31.541663   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:08:31.541750   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:08:31.542000   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:08:31.542534   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:08:31.542627   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:08:31.542711   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:08:31.847005   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:08:32.049586   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:08:32.355652   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:08:32.511031   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:08:32.526310   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:08:32.526755   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:08:32.526841   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:08:32.665898   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:08:32.667688   72639 out.go:235]   - Booting up control plane ...
	I1014 15:08:32.667806   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:08:32.681232   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:08:32.682929   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:08:32.683704   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:08:32.685936   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:09:12.687998   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:09:12.688248   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:12.688517   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:17.689026   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:17.689213   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:27.689821   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:27.690119   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:47.690936   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:47.691185   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691438   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:10:27.691721   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691744   72639 kubeadm.go:310] 
	I1014 15:10:27.691779   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:10:27.691847   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:10:27.691867   72639 kubeadm.go:310] 
	I1014 15:10:27.691907   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:10:27.691972   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:10:27.692124   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:10:27.692136   72639 kubeadm.go:310] 
	I1014 15:10:27.692253   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:10:27.692311   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:10:27.692352   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:10:27.692363   72639 kubeadm.go:310] 
	I1014 15:10:27.692497   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:10:27.692617   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:10:27.692633   72639 kubeadm.go:310] 
	I1014 15:10:27.692787   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:10:27.692915   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:10:27.693051   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:10:27.693146   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:10:27.693158   72639 kubeadm.go:310] 
	I1014 15:10:27.693497   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:10:27.693627   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:10:27.693710   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 15:10:27.693770   72639 kubeadm.go:394] duration metric: took 8m7.905137486s to StartCluster
	I1014 15:10:27.693808   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:10:27.693863   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:10:27.735373   72639 cri.go:89] found id: ""
	I1014 15:10:27.735410   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.735419   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:10:27.735425   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:10:27.735484   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:10:27.775691   72639 cri.go:89] found id: ""
	I1014 15:10:27.775713   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.775721   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:10:27.775727   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:10:27.775778   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:10:27.811621   72639 cri.go:89] found id: ""
	I1014 15:10:27.811645   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.811653   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:10:27.811658   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:10:27.811718   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:10:27.850894   72639 cri.go:89] found id: ""
	I1014 15:10:27.850917   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.850925   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:10:27.850931   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:10:27.850979   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:10:27.891559   72639 cri.go:89] found id: ""
	I1014 15:10:27.891596   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.891608   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:10:27.891616   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:10:27.891671   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:10:27.929896   72639 cri.go:89] found id: ""
	I1014 15:10:27.929929   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.929942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:10:27.930002   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:10:27.930096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:10:27.964801   72639 cri.go:89] found id: ""
	I1014 15:10:27.964828   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.964839   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:10:27.964845   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:10:27.964905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:10:28.011737   72639 cri.go:89] found id: ""
	I1014 15:10:28.011761   72639 logs.go:282] 0 containers: []
	W1014 15:10:28.011769   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:10:28.011777   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:10:28.011788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:10:28.088053   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:10:28.088082   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:10:28.088098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:10:28.214495   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:10:28.214531   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:10:28.254766   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:10:28.254796   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:10:28.304942   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:10:28.304977   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1014 15:10:28.319674   72639 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 15:10:28.319729   72639 out.go:270] * 
	W1014 15:10:28.319783   72639 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.319802   72639 out.go:270] * 
	W1014 15:10:28.320716   72639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 15:10:28.324551   72639 out.go:201] 
	W1014 15:10:28.325905   72639 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.325940   72639 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 15:10:28.325985   72639 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 15:10:28.327473   72639 out.go:201] 
	
	
	==> CRI-O <==
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.772569707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918935772548294,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7945d28-754d-4722-ad58-3a448e2e7757 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.772980527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba530ac1-ea67-4153-a5b7-917cd5f07750 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.773057588Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba530ac1-ea67-4153-a5b7-917cd5f07750 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.773310073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728918124367753623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70
-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7
f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annotations:map[st
ring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d
3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e694
71,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba530ac1-ea67-4153-a5b7-917cd5f07750 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.808053380Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8fb6c3fb-6750-4a55-9f50-c3bb1f5cb016 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.808234295Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8fb6c3fb-6750-4a55-9f50-c3bb1f5cb016 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.809791862Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b10941a4-4ef2-4b86-a64d-3376e0535da6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.810788365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918935810668967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b10941a4-4ef2-4b86-a64d-3376e0535da6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.811892366Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0143c3ba-dec0-4ce8-b546-f4876f4ed803 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.811980423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0143c3ba-dec0-4ce8-b546-f4876f4ed803 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.812242595Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728918124367753623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70
-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7
f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annotations:map[st
ring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d
3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e694
71,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0143c3ba-dec0-4ce8-b546-f4876f4ed803 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.850367202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9d410e18-65a4-4547-b240-89768b2800e5 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.850445088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9d410e18-65a4-4547-b240-89768b2800e5 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.852188864Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f46d010b-66dc-451b-a4b2-34e53b58789f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.852605090Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918935852584361,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f46d010b-66dc-451b-a4b2-34e53b58789f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.853123744Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=12bf5afd-cc0f-48d6-ba6f-229c64a541f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.853184540Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=12bf5afd-cc0f-48d6-ba6f-229c64a541f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.853371467Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728918124367753623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70
-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7
f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annotations:map[st
ring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d
3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e694
71,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=12bf5afd-cc0f-48d6-ba6f-229c64a541f5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.888134830Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=512b3da8-7140-4faa-8bf0-c29573e048ea name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.888226305Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=512b3da8-7140-4faa-8bf0-c29573e048ea name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.889471493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=50dae9ff-e33f-4a38-916e-d6e9742340f2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.889846630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918935889826552,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=50dae9ff-e33f-4a38-916e-d6e9742340f2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.890484816Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7dea54f0-23de-4f18-9b09-25aa383be210 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.890545219Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7dea54f0-23de-4f18-9b09-25aa383be210 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:15:35.890738071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728918124367753623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70
-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7
f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annotations:map[st
ring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d
3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e694
71,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7dea54f0-23de-4f18-9b09-25aa383be210 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	54da9997e909c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       3                   0590d28e358c8       storage-provisioner
	d1d58f06c02f6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   dfcbb62af0cc6       busybox
	6e3748f01b40b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   106c488f9ab21       coredns-7c65d6cfc9-994hx
	8562700fa08dc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      13 minutes ago      Running             kube-proxy                1                   22a3d648f9dff       kube-proxy-rh82t
	48bc323790016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   0590d28e358c8       storage-provisioner
	0aaa149381e52       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   d7742a4d0ed60       etcd-default-k8s-diff-port-201291
	be2f06f84e6b5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      13 minutes ago      Running             kube-scheduler            1                   4cbb5db056a6d       kube-scheduler-default-k8s-diff-port-201291
	a2df52bb84059       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      13 minutes ago      Running             kube-apiserver            1                   9cf4262d69c30       kube-apiserver-default-k8s-diff-port-201291
	7cfcaa231ef94       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      13 minutes ago      Running             kube-controller-manager   1                   0a660b7b688fa       kube-controller-manager-default-k8s-diff-port-201291
	
	
	==> coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39059 - 41192 "HINFO IN 4260166790663280947.876893321338102758. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010564947s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-201291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-201291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=default-k8s-diff-port-201291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T14_54_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:54:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-201291
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:15:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:12:46 +0000   Mon, 14 Oct 2024 14:54:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:12:46 +0000   Mon, 14 Oct 2024 14:54:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:12:46 +0000   Mon, 14 Oct 2024 14:54:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:12:46 +0000   Mon, 14 Oct 2024 15:02:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.128
	  Hostname:    default-k8s-diff-port-201291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f564671d50a747d2bc6d8c9c9f526232
	  System UUID:                f564671d-50a7-47d2-bc6d-8c9c9f526232
	  Boot ID:                    e3eff562-b446-40cd-8029-d7dae929ab92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-994hx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-201291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-201291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-201291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-rh82t                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-201291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-bcrqs                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-201291 event: Registered Node default-k8s-diff-port-201291 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-201291 event: Registered Node default-k8s-diff-port-201291 in Controller
	
	
	==> dmesg <==
	[Oct14 15:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051014] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.053472] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982749] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.645472] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.355308] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.060196] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060724] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.224205] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.136829] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.316922] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.303840] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.936699] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[Oct14 15:02] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.945044] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +4.781615] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.803811] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.422858] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] <==
	{"level":"info","ts":"2024-10-14T15:02:02.323187Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 received MsgPreVoteResp from b7d726258a4a2d44 at term 2"}
	{"level":"info","ts":"2024-10-14T15:02:02.323206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became candidate at term 3"}
	{"level":"info","ts":"2024-10-14T15:02:02.323237Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 received MsgVoteResp from b7d726258a4a2d44 at term 3"}
	{"level":"info","ts":"2024-10-14T15:02:02.323253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b7d726258a4a2d44 became leader at term 3"}
	{"level":"info","ts":"2024-10-14T15:02:02.323264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b7d726258a4a2d44 elected leader b7d726258a4a2d44 at term 3"}
	{"level":"info","ts":"2024-10-14T15:02:02.325768Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b7d726258a4a2d44","local-member-attributes":"{Name:default-k8s-diff-port-201291 ClientURLs:[https://192.168.50.128:2379]}","request-path":"/0/members/b7d726258a4a2d44/attributes","cluster-id":"cd7de093209a1f5d","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T15:02:02.326001Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:02:02.326394Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:02:02.326824Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T15:02:02.326871Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T15:02:02.327528Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:02:02.327734Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:02:02.328445Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T15:02:02.329352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.128:2379"}
	{"level":"warn","ts":"2024-10-14T15:02:18.921431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.404591ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261893158185404564 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" mod_revision:629 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" value_size:6828 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-14T15:02:18.921560Z","caller":"traceutil/trace.go:171","msg":"trace[919984271] linearizableReadLoop","detail":"{readStateIndex:671; appliedIndex:670; }","duration":"147.617318ms","start":"2024-10-14T15:02:18.773928Z","end":"2024-10-14T15:02:18.921545Z","steps":["trace[919984271] 'read index received'  (duration: 26.001µs)","trace[919984271] 'applied index is now lower than readState.Index'  (duration: 147.590247ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T15:02:18.921782Z","caller":"traceutil/trace.go:171","msg":"trace[1500510879] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"309.236388ms","start":"2024-10-14T15:02:18.612534Z","end":"2024-10-14T15:02:18.921771Z","steps":["trace[1500510879] 'process raft request'  (duration: 145.776831ms)","trace[1500510879] 'compare'  (duration: 162.138668ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T15:02:18.921921Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T15:02:18.612516Z","time spent":"309.322705ms","remote":"127.0.0.1:56754","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6906,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" mod_revision:629 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" value_size:6828 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" > >"}
	{"level":"warn","ts":"2024-10-14T15:02:19.083280Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.024219ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261893158185404566 > lease_revoke:<id:2d44928b85ec0430>","response":"size:29"}
	{"level":"info","ts":"2024-10-14T15:02:19.083385Z","caller":"traceutil/trace.go:171","msg":"trace[1642204654] linearizableReadLoop","detail":"{readStateIndex:672; appliedIndex:671; }","duration":"154.185789ms","start":"2024-10-14T15:02:18.929184Z","end":"2024-10-14T15:02:19.083370Z","steps":["trace[1642204654] 'read index received'  (duration: 47.029692ms)","trace[1642204654] 'applied index is now lower than readState.Index'  (duration: 107.155032ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T15:02:19.083509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.310259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-201291\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-10-14T15:02:19.083534Z","caller":"traceutil/trace.go:171","msg":"trace[654780904] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-201291; range_end:; response_count:1; response_revision:630; }","duration":"154.345312ms","start":"2024-10-14T15:02:18.929181Z","end":"2024-10-14T15:02:19.083526Z","steps":["trace[654780904] 'agreement among raft nodes before linearized reading'  (duration: 154.229017ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:12:02.361411Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":889}
	{"level":"info","ts":"2024-10-14T15:12:02.379741Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":889,"took":"17.899369ms","hash":3738844866,"current-db-size-bytes":2883584,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2883584,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-10-14T15:12:02.379824Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3738844866,"revision":889,"compact-revision":-1}
	
	
	==> kernel <==
	 15:15:36 up 14 min,  0 users,  load average: 0.80, 0.33, 0.16
	Linux default-k8s-diff-port-201291 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] <==
	W1014 15:12:04.751490       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:12:04.751740       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:12:04.752888       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:12:04.752928       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:13:04.753197       1 handler_proxy.go:99] no RequestInfo found in the context
	W1014 15:13:04.753194       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:13:04.753579       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1014 15:13:04.753588       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:13:04.754730       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:13:04.754789       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:15:04.755977       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:15:04.756190       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1014 15:15:04.755973       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:15:04.756242       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1014 15:15:04.757468       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:15:04.757543       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] <==
	E1014 15:10:07.366955       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:10:07.825824       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:10:37.373155       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:10:37.833061       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:11:07.380323       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:11:07.841797       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:11:37.386917       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:11:37.849644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:12:07.393332       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:12:07.858378       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:12:37.399295       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:12:37.865711       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:12:46.642073       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-201291"
	E1014 15:13:07.406234       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:13:07.874447       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:13:11.002735       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="213.502µs"
	I1014 15:13:25.000264       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="83.089µs"
	E1014 15:13:37.411975       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:13:37.882252       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:14:07.418636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:14:07.889549       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:14:37.425405       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:14:37.898624       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:15:07.431525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:15:07.905772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:02:04.774347       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:02:04.839518       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.128"]
	E1014 15:02:04.840246       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:02:04.941234       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:02:04.941304       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:02:04.941336       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:02:04.966379       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:02:04.973423       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:02:04.973500       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:02:04.975755       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:02:04.981821       1 config.go:328] "Starting node config controller"
	I1014 15:02:04.981907       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:02:04.983189       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:02:04.983309       1 config.go:199] "Starting service config controller"
	I1014 15:02:04.983333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:02:05.084615       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:02:05.084722       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 15:02:05.086642       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] <==
	I1014 15:02:01.973523       1 serving.go:386] Generated self-signed cert in-memory
	W1014 15:02:03.685310       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 15:02:03.685383       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 15:02:03.685398       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 15:02:03.685406       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 15:02:03.745689       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 15:02:03.745786       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:02:03.749145       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 15:02:03.749576       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 15:02:03.749966       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 15:02:03.750790       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 15:02:03.850595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:14:27 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:27.989183     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:14:30 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:30.194789     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918870194446624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:30 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:30.194851     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918870194446624,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:40 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:40.196589     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918880196168415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:40 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:40.196869     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918880196168415,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:42 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:42.987205     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:14:50 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:50.199303     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918890198799230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:50 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:50.199381     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918890198799230,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:53 default-k8s-diff-port-201291 kubelet[913]: E1014 15:14:53.987571     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:15:00 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:00.018400     913 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:15:00 default-k8s-diff-port-201291 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:15:00 default-k8s-diff-port-201291 kubelet[913]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:15:00 default-k8s-diff-port-201291 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:15:00 default-k8s-diff-port-201291 kubelet[913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:15:00 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:00.201536     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918900201205450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:00 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:00.201783     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918900201205450,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:08 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:08.986463     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:15:10 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:10.203540     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918910203201951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:10 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:10.203977     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918910203201951,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:20 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:20.206390     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918920205972718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:20 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:20.206474     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918920205972718,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:23 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:23.988302     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:15:30 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:30.208620     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918930208209340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:30 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:30.208672     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918930208209340,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:34 default-k8s-diff-port-201291 kubelet[913]: E1014 15:15:34.987878     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	
	
	==> storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] <==
	I1014 15:02:04.510565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 15:02:34.514449       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] <==
	I1014 15:02:35.292622       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 15:02:35.310275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 15:02:35.310510       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 15:02:52.717439       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 15:02:52.717769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-201291_4291a378-32ef-499c-b603-0b1c483483cb!
	I1014 15:02:52.722257       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cf6f0ca-8e1e-43ca-81cd-d0b61c17bc59", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-201291_4291a378-32ef-499c-b603-0b1c483483cb became leader
	I1014 15:02:52.817947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-201291_4291a378-32ef-499c-b603-0b1c483483cb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-bcrqs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 describe pod metrics-server-6867b74b74-bcrqs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-201291 describe pod metrics-server-6867b74b74-bcrqs: exit status 1 (62.720964ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-bcrqs" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-201291 describe pod metrics-server-6867b74b74-bcrqs: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.11s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.27s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1014 15:07:01.835977   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:07:38.241008   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:07:49.187123   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-989166 -n embed-certs-989166
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-14 15:15:51.417587595 +0000 UTC m=+5835.698935939
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-989166 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-989166 logs -n 25: (2.106226133s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-517678 sudo cat                              | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo find                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo crio                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-517678                                       | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:58:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:58:18.000027   72639 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:58:18.000165   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000176   72639 out.go:358] Setting ErrFile to fd 2...
	I1014 14:58:18.000189   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000390   72639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:58:18.000911   72639 out.go:352] Setting JSON to false
	I1014 14:58:18.001828   72639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6048,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:58:18.001919   72639 start.go:139] virtualization: kvm guest
	I1014 14:58:18.004056   72639 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:58:18.005382   72639 notify.go:220] Checking for updates...
	I1014 14:58:18.005437   72639 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:58:18.006939   72639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:58:18.008275   72639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:58:18.009565   72639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:58:18.010773   72639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:58:18.011941   72639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:58:18.013472   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:58:18.013833   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.013892   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.028372   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1014 14:58:18.028786   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.029355   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.029375   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.029671   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.029827   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.031644   72639 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:58:18.033229   72639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:58:18.033524   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.033565   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.048210   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1014 14:58:18.048620   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.049080   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.049102   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.049377   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.049550   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.084664   72639 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:58:18.085942   72639 start.go:297] selected driver: kvm2
	I1014 14:58:18.085952   72639 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.086042   72639 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:58:18.086707   72639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.086795   72639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:58:18.101802   72639 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:58:18.102194   72639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:58:18.102224   72639 cni.go:84] Creating CNI manager for ""
	I1014 14:58:18.102263   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:58:18.102315   72639 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.102441   72639 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.105418   72639 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:58:16.182868   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:18.106656   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:58:18.106696   72639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:58:18.106708   72639 cache.go:56] Caching tarball of preloaded images
	I1014 14:58:18.106790   72639 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:58:18.106800   72639 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:58:18.106889   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:58:18.107063   72639 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:58:22.262902   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:25.334877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:31.414867   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:34.486863   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:40.566883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:43.638929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:49.718856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:52.790946   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:58.870883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:01.942844   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:08.022831   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:11.094893   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:17.174897   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:20.246818   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:26.326911   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:29.398852   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:35.478877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:38.550829   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:44.630857   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:47.702856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:53.782842   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:56.854890   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:02.934894   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:06.006879   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:12.086905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:15.158856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:21.238905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:24.310889   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:30.390878   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:33.462909   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:39.542866   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:42.614929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:48.694859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:51.766865   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:57.846913   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:00.918859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:06.998892   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:10.070810   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:13.075950   72173 start.go:364] duration metric: took 3m43.687804446s to acquireMachinesLock for "embed-certs-989166"
	I1014 15:01:13.076005   72173 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:13.076011   72173 fix.go:54] fixHost starting: 
	I1014 15:01:13.076341   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:13.076386   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:13.092168   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I1014 15:01:13.092686   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:13.093180   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:01:13.093204   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:13.093560   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:13.093749   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:13.093889   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:01:13.095639   72173 fix.go:112] recreateIfNeeded on embed-certs-989166: state=Stopped err=<nil>
	I1014 15:01:13.095665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	W1014 15:01:13.095827   72173 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:13.097909   72173 out.go:177] * Restarting existing kvm2 VM for "embed-certs-989166" ...
	I1014 15:01:13.099253   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Start
	I1014 15:01:13.099433   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring networks are active...
	I1014 15:01:13.100328   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network default is active
	I1014 15:01:13.100683   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network mk-embed-certs-989166 is active
	I1014 15:01:13.101062   72173 main.go:141] libmachine: (embed-certs-989166) Getting domain xml...
	I1014 15:01:13.101867   72173 main.go:141] libmachine: (embed-certs-989166) Creating domain...
	I1014 15:01:13.073323   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:13.073356   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073658   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:01:13.073682   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:01:13.075825   71679 machine.go:96] duration metric: took 4m37.425006s to provisionDockerMachine
	I1014 15:01:13.075866   71679 fix.go:56] duration metric: took 4m37.446829923s for fixHost
	I1014 15:01:13.075872   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 4m37.446848059s
	W1014 15:01:13.075889   71679 start.go:714] error starting host: provision: host is not running
	W1014 15:01:13.075983   71679 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1014 15:01:13.075992   71679 start.go:729] Will try again in 5 seconds ...
	I1014 15:01:14.319338   72173 main.go:141] libmachine: (embed-certs-989166) Waiting to get IP...
	I1014 15:01:14.320167   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.320582   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.320651   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.320577   73268 retry.go:31] will retry after 213.073722ms: waiting for machine to come up
	I1014 15:01:14.534913   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.535353   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.535375   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.535306   73268 retry.go:31] will retry after 316.205029ms: waiting for machine to come up
	I1014 15:01:14.852769   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.853201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.853261   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.853201   73268 retry.go:31] will retry after 399.414867ms: waiting for machine to come up
	I1014 15:01:15.253657   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.253955   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.253979   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.253917   73268 retry.go:31] will retry after 537.097034ms: waiting for machine to come up
	I1014 15:01:15.792362   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.792736   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.792763   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.792703   73268 retry.go:31] will retry after 594.582114ms: waiting for machine to come up
	I1014 15:01:16.388419   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:16.388838   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:16.388869   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:16.388793   73268 retry.go:31] will retry after 814.814512ms: waiting for machine to come up
	I1014 15:01:17.204791   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:17.205229   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:17.205255   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:17.205176   73268 retry.go:31] will retry after 971.673961ms: waiting for machine to come up
	I1014 15:01:18.178701   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:18.179100   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:18.179130   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:18.179048   73268 retry.go:31] will retry after 941.576822ms: waiting for machine to come up
	I1014 15:01:19.122097   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:19.122488   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:19.122514   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:19.122453   73268 retry.go:31] will retry after 1.5308999s: waiting for machine to come up
	I1014 15:01:18.077601   71679 start.go:360] acquireMachinesLock for no-preload-813300: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:01:20.655098   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:20.655524   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:20.655550   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:20.655475   73268 retry.go:31] will retry after 1.590510545s: waiting for machine to come up
	I1014 15:01:22.248128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:22.248551   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:22.248572   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:22.248511   73268 retry.go:31] will retry after 1.965898839s: waiting for machine to come up
	I1014 15:01:24.215742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:24.216187   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:24.216240   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:24.216136   73268 retry.go:31] will retry after 3.476459931s: waiting for machine to come up
	I1014 15:01:27.696804   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:27.697201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:27.697254   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:27.697175   73268 retry.go:31] will retry after 3.212757582s: waiting for machine to come up
	I1014 15:01:32.235659   72390 start.go:364] duration metric: took 3m35.715993521s to acquireMachinesLock for "default-k8s-diff-port-201291"
	I1014 15:01:32.235710   72390 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:32.235718   72390 fix.go:54] fixHost starting: 
	I1014 15:01:32.236084   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:32.236134   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:32.253294   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I1014 15:01:32.253760   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:32.254255   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:01:32.254275   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:32.254616   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:32.254797   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:32.254973   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:01:32.256494   72390 fix.go:112] recreateIfNeeded on default-k8s-diff-port-201291: state=Stopped err=<nil>
	I1014 15:01:32.256523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	W1014 15:01:32.256683   72390 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:32.258989   72390 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-201291" ...
	I1014 15:01:30.911781   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912283   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has current primary IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912313   72173 main.go:141] libmachine: (embed-certs-989166) Found IP for machine: 192.168.39.41
	I1014 15:01:30.912331   72173 main.go:141] libmachine: (embed-certs-989166) Reserving static IP address...
	I1014 15:01:30.912771   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.912799   72173 main.go:141] libmachine: (embed-certs-989166) DBG | skip adding static IP to network mk-embed-certs-989166 - found existing host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"}
	I1014 15:01:30.912806   72173 main.go:141] libmachine: (embed-certs-989166) Reserved static IP address: 192.168.39.41
	I1014 15:01:30.912815   72173 main.go:141] libmachine: (embed-certs-989166) Waiting for SSH to be available...
	I1014 15:01:30.912822   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Getting to WaitForSSH function...
	I1014 15:01:30.914919   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915273   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.915310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915392   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH client type: external
	I1014 15:01:30.915414   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa (-rw-------)
	I1014 15:01:30.915465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:30.915489   72173 main.go:141] libmachine: (embed-certs-989166) DBG | About to run SSH command:
	I1014 15:01:30.915503   72173 main.go:141] libmachine: (embed-certs-989166) DBG | exit 0
	I1014 15:01:31.042620   72173 main.go:141] libmachine: (embed-certs-989166) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:31.043061   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetConfigRaw
	I1014 15:01:31.043675   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.046338   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046679   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.046720   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046941   72173 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/config.json ...
	I1014 15:01:31.047132   72173 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:31.047149   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.047348   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.049453   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049835   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.049857   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049978   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.050147   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050282   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050419   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.050573   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.050814   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.050828   72173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:31.163270   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:31.163306   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163614   72173 buildroot.go:166] provisioning hostname "embed-certs-989166"
	I1014 15:01:31.163644   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.166684   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167009   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.167040   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.167416   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167582   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167718   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.167857   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.168025   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.168040   72173 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-989166 && echo "embed-certs-989166" | sudo tee /etc/hostname
	I1014 15:01:31.292369   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-989166
	
	I1014 15:01:31.292405   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.295057   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295425   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.295449   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295713   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.295915   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296076   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296220   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.296395   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.296552   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.296567   72173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-989166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-989166/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-989166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:31.411080   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:31.411112   72173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:31.411131   72173 buildroot.go:174] setting up certificates
	I1014 15:01:31.411142   72173 provision.go:84] configureAuth start
	I1014 15:01:31.411150   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.411396   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.413972   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414319   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.414341   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.416775   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417092   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.417113   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417278   72173 provision.go:143] copyHostCerts
	I1014 15:01:31.417340   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:31.417353   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:31.417437   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:31.417549   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:31.417559   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:31.417600   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:31.417677   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:31.417687   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:31.417721   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:31.417788   72173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.embed-certs-989166 san=[127.0.0.1 192.168.39.41 embed-certs-989166 localhost minikube]
	I1014 15:01:31.599973   72173 provision.go:177] copyRemoteCerts
	I1014 15:01:31.600034   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:31.600060   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.602964   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603270   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.603296   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.603665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.603821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.603949   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:31.688890   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:31.713474   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 15:01:31.737692   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 15:01:31.760955   72173 provision.go:87] duration metric: took 349.799595ms to configureAuth
	I1014 15:01:31.760986   72173 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:31.761172   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:31.761244   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.763800   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764149   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.764181   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.764494   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764636   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764732   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.764852   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.765002   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.765016   72173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:31.992783   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:31.992811   72173 machine.go:96] duration metric: took 945.667058ms to provisionDockerMachine
	I1014 15:01:31.992823   72173 start.go:293] postStartSetup for "embed-certs-989166" (driver="kvm2")
	I1014 15:01:31.992834   72173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:31.992848   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.993203   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:31.993235   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.995966   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996380   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.996418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996538   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.996714   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.996864   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.997003   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.081931   72173 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:32.086191   72173 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:32.086218   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:32.086287   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:32.086368   72173 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:32.086455   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:32.096414   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:32.120348   72173 start.go:296] duration metric: took 127.509685ms for postStartSetup
	I1014 15:01:32.120392   72173 fix.go:56] duration metric: took 19.044380323s for fixHost
	I1014 15:01:32.120412   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.123024   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123435   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.123465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123649   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.123832   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.123986   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.124152   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.124288   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:32.124487   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:32.124502   72173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:32.235487   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918092.208431219
	
	I1014 15:01:32.235513   72173 fix.go:216] guest clock: 1728918092.208431219
	I1014 15:01:32.235522   72173 fix.go:229] Guest: 2024-10-14 15:01:32.208431219 +0000 UTC Remote: 2024-10-14 15:01:32.12039587 +0000 UTC m=+242.874215269 (delta=88.035349ms)
	I1014 15:01:32.235559   72173 fix.go:200] guest clock delta is within tolerance: 88.035349ms
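
The lines above show the restart path measuring the skew between the guest VM clock and the host clock and accepting it because it is under the tolerance. A minimal Go sketch of that kind of check follows; the helper name and the 2-second tolerance are assumptions for illustration, not minikube's actual fix.go code.

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock differs from the host clock
// by no more than maxSkew, and returns the absolute delta.
// Hypothetical helper for illustration only.
func withinTolerance(guest, host time.Time, maxSkew time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= maxSkew
}

func main() {
	host := time.Now()
	guest := host.Add(88 * time.Millisecond) // comparable to the delta reported in the log above
	if delta, ok := withinTolerance(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}
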
	I1014 15:01:32.235572   72173 start.go:83] releasing machines lock for "embed-certs-989166", held for 19.159587089s
	I1014 15:01:32.235600   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.235877   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:32.238642   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.238995   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.239025   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.239175   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239719   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239891   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239978   72173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:32.240031   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.240091   72173 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:32.240115   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.242742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243102   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243177   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243275   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243482   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.243653   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243664   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.243676   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243811   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243822   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.243929   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.244050   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.244168   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.357542   72173 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:32.365113   72173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:32.510557   72173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:32.516545   72173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:32.516628   72173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:32.533449   72173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:32.533473   72173 start.go:495] detecting cgroup driver to use...
	I1014 15:01:32.533549   72173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:32.549503   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:32.563126   72173 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:32.563184   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:32.576972   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:32.591047   72173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:32.704839   72173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:32.844770   72173 docker.go:233] disabling docker service ...
	I1014 15:01:32.844855   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:32.859524   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:32.872297   72173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:33.014291   72173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:33.136889   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:33.151656   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:33.170504   72173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:33.170575   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.180894   72173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:33.180968   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.192268   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.203509   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.215958   72173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:33.227981   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.241615   72173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.261168   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.273098   72173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:33.284050   72173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:33.284225   72173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:33.299547   72173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:33.310259   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:33.426563   72173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:33.526759   72173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:33.526817   72173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:33.532297   72173 start.go:563] Will wait 60s for crictl version
	I1014 15:01:33.532356   72173 ssh_runner.go:195] Run: which crictl
	I1014 15:01:33.536385   72173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:33.576222   72173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:33.576305   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.604603   72173 ssh_runner.go:195] Run: crio --version
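
Before this point the log shows cri-o being reconfigured with sed one-liners: the pause image is pinned to registry.k8s.io/pause:3.10 and the cgroup manager is set to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf, after which crio is restarted. The following Go sketch performs equivalent in-place edits; it is only an illustration of what those sed commands do, and the helper name is hypothetical.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits in the log above: it pins the pause
// image and the cgroup manager in a cri-o drop-in config file.
func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	// Replace any existing pause_image line with the pinned image.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	// Replace any existing cgroup_manager line with the chosen driver.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10", "cgroupfs")
	if err != nil {
		fmt.Println(err)
	}
}
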
	I1014 15:01:33.636261   72173 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:33.637497   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:33.640450   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.640768   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:33.640806   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.641001   72173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:33.645241   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:33.658028   72173 kubeadm.go:883] updating cluster {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:33.658181   72173 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:33.658246   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:33.695188   72173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:33.695261   72173 ssh_runner.go:195] Run: which lz4
	I1014 15:01:33.699735   72173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:33.704540   72173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:33.704576   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:32.260401   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Start
	I1014 15:01:32.260569   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring networks are active...
	I1014 15:01:32.261176   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network default is active
	I1014 15:01:32.261498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network mk-default-k8s-diff-port-201291 is active
	I1014 15:01:32.261795   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Getting domain xml...
	I1014 15:01:32.262414   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Creating domain...
	I1014 15:01:33.520115   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting to get IP...
	I1014 15:01:33.521127   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521518   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.521520   73405 retry.go:31] will retry after 278.409801ms: waiting for machine to come up
	I1014 15:01:33.802289   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802720   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.802688   73405 retry.go:31] will retry after 362.923826ms: waiting for machine to come up
	I1014 15:01:34.167836   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168228   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168273   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.168163   73405 retry.go:31] will retry after 315.156371ms: waiting for machine to come up
	I1014 15:01:34.485445   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485855   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.485840   73405 retry.go:31] will retry after 573.46626ms: waiting for machine to come up
	I1014 15:01:35.061472   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.061997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.062027   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.061965   73405 retry.go:31] will retry after 519.420022ms: waiting for machine to come up
	I1014 15:01:35.582694   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583130   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583155   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.583062   73405 retry.go:31] will retry after 661.055324ms: waiting for machine to come up
	I1014 15:01:36.245525   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:36.245834   73405 retry.go:31] will retry after 870.411428ms: waiting for machine to come up
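
The default-k8s-diff-port-201291 lines above are a retry loop: libmachine repeatedly asks libvirt for the domain's DHCP lease and sleeps a growing, jittered delay between attempts until an IP appears. A rough Go sketch of that pattern, with hypothetical names and a stubbed lookup (the real logic lives in minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address, sleeping a randomized,
// growing delay between attempts. Illustrative sketch, not minikube's retry.go.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay += delay / 2 // grow the base delay for the next attempt
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.0.2.10", nil // documentation address used as a stand-in
	}, 10)
	fmt.Println(ip, err)
}
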
	I1014 15:01:35.120269   72173 crio.go:462] duration metric: took 1.42058504s to copy over tarball
	I1014 15:01:35.120372   72173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:37.206126   72173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08572724s)
	I1014 15:01:37.206168   72173 crio.go:469] duration metric: took 2.085859852s to extract the tarball
	I1014 15:01:37.206176   72173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:37.243007   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:37.289639   72173 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:37.289667   72173 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:37.289678   72173 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.31.1 crio true true} ...
	I1014 15:01:37.289793   72173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-989166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:37.289878   72173 ssh_runner.go:195] Run: crio config
	I1014 15:01:37.348641   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:37.348665   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:37.348684   72173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:37.348711   72173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-989166 NodeName:embed-certs-989166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:37.348861   72173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-989166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:37.348925   72173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:37.359204   72173 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:37.359272   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:37.368810   72173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 15:01:37.385402   72173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:37.401828   72173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1014 15:01:37.418811   72173 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:37.422655   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
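
The grep/echo pipeline above pins control-plane.minikube.internal in /etc/hosts idempotently: any stale line for the name is dropped, then the current IP mapping is appended. A small Go equivalent, shown only to make the shell one-liner easier to read (the helper name is an assumption):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHostsEntry drops any existing line for name and appends the desired
// ip<TAB>name mapping, mirroring the shell pipeline in the log above.
func pinHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // remove the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHostsEntry("/etc/hosts", "192.168.39.41", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
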
	I1014 15:01:37.434567   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:37.561408   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:37.579549   72173 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166 for IP: 192.168.39.41
	I1014 15:01:37.579577   72173 certs.go:194] generating shared ca certs ...
	I1014 15:01:37.579596   72173 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:37.579766   72173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:37.579878   72173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:37.579894   72173 certs.go:256] generating profile certs ...
	I1014 15:01:37.579998   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/client.key
	I1014 15:01:37.580079   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key.8939f8c2
	I1014 15:01:37.580148   72173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key
	I1014 15:01:37.580316   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:37.580364   72173 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:37.580376   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:37.580413   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:37.580445   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:37.580482   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:37.580536   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:37.581259   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:37.632130   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:37.678608   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:37.705377   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:37.731897   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 15:01:37.775043   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:37.801653   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:37.826547   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:37.852086   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:37.878715   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:37.905883   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:37.932458   72173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:37.951362   72173 ssh_runner.go:195] Run: openssl version
	I1014 15:01:37.957730   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:37.969936   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974871   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974931   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.981060   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:37.992086   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:38.003528   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008267   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008332   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.014243   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:38.025272   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:38.036191   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040751   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040804   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.046605   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:38.057815   72173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:38.062497   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:38.068889   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:38.075278   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:38.081663   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:38.087892   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:38.093748   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
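
Each `openssl x509 -checkend 86400` call above asks whether a control-plane certificate will expire within the next 24 hours, which is what decides whether certificates need regenerating on restart. A hedged Go sketch of the same check using crypto/x509 (helper name and paths are illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, the same question `openssl x509 -checkend` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
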
	I1014 15:01:38.099807   72173 kubeadm.go:392] StartCluster: {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:38.099912   72173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:38.099960   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.140896   72173 cri.go:89] found id: ""
	I1014 15:01:38.140973   72173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:38.151443   72173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:38.151462   72173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:38.151512   72173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:38.161419   72173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:38.162357   72173 kubeconfig.go:125] found "embed-certs-989166" server: "https://192.168.39.41:8443"
	I1014 15:01:38.164328   72173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:38.174731   72173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.41
	I1014 15:01:38.174767   72173 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:38.174782   72173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:38.174849   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.214903   72173 cri.go:89] found id: ""
	I1014 15:01:38.214982   72173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:38.232891   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:38.242711   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:38.242735   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:38.242793   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:01:38.251939   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:38.252019   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:38.262108   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:01:38.271688   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:38.271751   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:38.281420   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.290693   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:38.290752   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.300205   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:01:38.309174   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:38.309236   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:38.318616   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:38.328337   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:38.436297   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:37.118307   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:37.118706   73405 retry.go:31] will retry after 1.481454557s: waiting for machine to come up
	I1014 15:01:38.601780   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602168   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602212   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:38.602118   73405 retry.go:31] will retry after 1.22705177s: waiting for machine to come up
	I1014 15:01:39.831413   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831889   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831963   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:39.831838   73405 retry.go:31] will retry after 1.898722681s: waiting for machine to come up
	I1014 15:01:39.574410   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138075676s)
	I1014 15:01:39.574444   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.789417   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.873563   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:40.011579   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:40.011673   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:40.511877   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.012608   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.512235   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.012435   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.047878   72173 api_server.go:72] duration metric: took 2.036298602s to wait for apiserver process to appear ...
	I1014 15:01:42.047909   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:01:42.047935   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.298692   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.298726   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.298743   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.317315   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.317353   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.548651   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.559477   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:44.559513   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.048060   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.057070   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.057099   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.548344   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.552611   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.552640   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:46.048314   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:46.054943   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:01:46.062740   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:01:46.062769   72173 api_server.go:131] duration metric: took 4.014851988s to wait for apiserver health ...
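
Note: the healthz block above is minikube polling the restarted kube-apiserver roughly every 500ms, treating a 500 ("healthz check failed" while poststarthooks such as rbac/bootstrap-roles are still pending) as retryable until the endpoint returns 200. The Go sketch below shows that general pattern only; it is an illustration, not minikube's api_server.go, and it skips TLS verification purely because the test endpoint uses the cluster's self-signed CA.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption for this sketch: trust is skipped; minikube itself
		// talks to the apiserver using the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return nil // "/healthz returned 200: ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the poll interval visible in the log
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.41:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
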
	I1014 15:01:46.062779   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:46.062785   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:46.064824   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:01:41.731928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732483   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732515   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:41.732435   73405 retry.go:31] will retry after 2.349662063s: waiting for machine to come up
	I1014 15:01:44.083975   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084492   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:44.084437   73405 retry.go:31] will retry after 3.472214726s: waiting for machine to come up
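
Note: the repeated "will retry after ..." lines are libmachine polling libvirt until the VM's DHCP lease (and therefore its IP) appears, sleeping a growing, jittered interval between attempts. A small Go sketch of that retry shape follows; the backoff policy is illustrative, not retry.go's exact algorithm, and waitForIP/lookup are hypothetical names.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP keeps calling lookup until it yields an IP, doubling a jittered
// backoff between attempts, similar in spirit to the retries in the log.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	backoff := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 3 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.128", nil
	}, 10)
	fmt.Println(ip, err)
}
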
	I1014 15:01:46.066505   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:01:46.092975   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
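
Note: the two commands above create /etc/cni/net.d and copy a 496-byte bridge CNI conflist into it. As a rough illustration of what such a conflist looks like (the exact file minikube writes may differ in plugin fields and subnet; this is not its template), the sketch below writes a generic bridge + portmap configuration.

package main

import (
	"fmt"
	"os"
)

// An illustrative bridge conflist; fields and the 10.244.0.0/16 subnet are
// assumptions, not the exact contents of minikube's 1-k8s.conflist.
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	// Written to a local path here; on the node the destination is
	// /etc/cni/net.d/1-k8s.conflist, as in the scp line above.
	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println(err)
	}
}
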
	I1014 15:01:46.123873   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:01:46.142575   72173 system_pods.go:59] 8 kube-system pods found
	I1014 15:01:46.142636   72173 system_pods.go:61] "coredns-7c65d6cfc9-r8x9s" [5a00095c-8777-412a-a7af-319a03d6153e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:01:46.142647   72173 system_pods.go:61] "etcd-embed-certs-989166" [981d2f54-f128-4527-a7cb-a6b9c647740b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:01:46.142658   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [31780b5a-6ebf-4c75-bd27-64a95193827f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:01:46.142668   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [345e7656-579a-4be9-bcf0-4117880a2988] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:01:46.142678   72173 system_pods.go:61] "kube-proxy-7p84k" [5d8243a8-7247-490f-9102-61008a614a67] Running
	I1014 15:01:46.142685   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [53b4b4a4-74ec-485e-99e3-b53c2edc80ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:01:46.142695   72173 system_pods.go:61] "metrics-server-6867b74b74-zc8zh" [5abf22c7-d271-4c3a-8e0e-cd867142cee1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:01:46.142703   72173 system_pods.go:61] "storage-provisioner" [6860efa4-c72f-477f-b9e1-e90ddcd112b5] Running
	I1014 15:01:46.142711   72173 system_pods.go:74] duration metric: took 18.811157ms to wait for pod list to return data ...
	I1014 15:01:46.142722   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:01:46.154420   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:01:46.154449   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:01:46.154463   72173 node_conditions.go:105] duration metric: took 11.735142ms to run NodePressure ...
	I1014 15:01:46.154483   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:46.417106   72173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422102   72173 kubeadm.go:739] kubelet initialised
	I1014 15:01:46.422127   72173 kubeadm.go:740] duration metric: took 4.991248ms waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422135   72173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:01:46.428014   72173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.432946   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432965   72173 pod_ready.go:82] duration metric: took 4.927935ms for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.432972   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432979   72173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.441849   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441868   72173 pod_ready.go:82] duration metric: took 8.882863ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.441877   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441883   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.446863   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446891   72173 pod_ready.go:82] duration metric: took 4.997658ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.446912   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446922   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.526949   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526972   72173 pod_ready.go:82] duration metric: took 80.035898ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.526981   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526987   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927217   72173 pod_ready.go:93] pod "kube-proxy-7p84k" in "kube-system" namespace has status "Ready":"True"
	I1014 15:01:46.927249   72173 pod_ready.go:82] duration metric: took 400.252417ms for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927263   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:48.933034   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
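
Note: the pod_ready lines above keep skipping control-plane pods because the node itself still reports Ready=False; only kube-proxy, which tolerates a not-ready node, passes immediately. The condition being waited on ultimately reduces to the pod's PodReady condition, sketched below using the k8s.io/api types (an illustration of that condition check, not minikube's pod_ready.go).

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{}
	pod.Status.Conditions = []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}
	fmt.Println(isPodReady(pod)) // false until the containers (and node) recover
}
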
	I1014 15:01:47.558671   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559112   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:47.559067   73405 retry.go:31] will retry after 3.421253013s: waiting for machine to come up
	I1014 15:01:50.981602   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has current primary IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982167   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Found IP for machine: 192.168.50.128
	I1014 15:01:50.982186   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserving static IP address...
	I1014 15:01:50.982682   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.982703   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserved static IP address: 192.168.50.128
	I1014 15:01:50.982722   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | skip adding static IP to network mk-default-k8s-diff-port-201291 - found existing host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"}
	I1014 15:01:50.982743   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Getting to WaitForSSH function...
	I1014 15:01:50.982781   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for SSH to be available...
	I1014 15:01:50.985084   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.985640   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985750   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH client type: external
	I1014 15:01:50.985778   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa (-rw-------)
	I1014 15:01:50.985814   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:50.985832   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | About to run SSH command:
	I1014 15:01:50.985849   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | exit 0
	I1014 15:01:51.123927   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:51.124457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetConfigRaw
	I1014 15:01:51.125106   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.128286   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.128716   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.128770   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.129045   72390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/config.json ...
	I1014 15:01:51.129283   72390 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:51.129308   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.129551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.131756   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132164   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.132207   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132488   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.132701   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.132873   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.133022   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.133181   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.133421   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.133436   72390 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:51.244659   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:51.244691   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.244923   72390 buildroot.go:166] provisioning hostname "default-k8s-diff-port-201291"
	I1014 15:01:51.244953   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.245149   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.248061   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248429   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.248463   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248521   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.248697   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.248887   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.249034   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.249227   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.249448   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.249463   72390 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-201291 && echo "default-k8s-diff-port-201291" | sudo tee /etc/hostname
	I1014 15:01:51.373260   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-201291
	
	I1014 15:01:51.373293   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.376195   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376528   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.376549   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376752   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.376962   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377159   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377296   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.377446   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.377657   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.377676   72390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-201291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-201291/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-201291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:52.179441   72639 start.go:364] duration metric: took 3m34.072351032s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 15:01:52.179497   72639 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:52.179505   72639 fix.go:54] fixHost starting: 
	I1014 15:01:52.179834   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:52.179873   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:52.196724   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I1014 15:01:52.197171   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:52.197649   72639 main.go:141] libmachine: Using API Version  1
	I1014 15:01:52.197673   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:52.198010   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:52.198191   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:01:52.198337   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 15:01:52.199789   72639 fix.go:112] recreateIfNeeded on old-k8s-version-399767: state=Stopped err=<nil>
	I1014 15:01:52.199826   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	W1014 15:01:52.199998   72639 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:52.202220   72639 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	I1014 15:01:52.203601   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .Start
	I1014 15:01:52.203771   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 15:01:52.204575   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 15:01:52.204971   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 15:01:52.205326   72639 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 15:01:52.206026   72639 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 15:01:51.488446   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:51.488486   72390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:51.488535   72390 buildroot.go:174] setting up certificates
	I1014 15:01:51.488553   72390 provision.go:84] configureAuth start
	I1014 15:01:51.488570   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.488867   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.491749   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492141   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.492171   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492351   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.494197   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.494524   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494693   72390 provision.go:143] copyHostCerts
	I1014 15:01:51.494745   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:51.494764   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:51.494834   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:51.494945   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:51.494958   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:51.494992   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:51.495081   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:51.495095   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:51.495122   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:51.495214   72390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-201291 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-201291 localhost minikube]
	I1014 15:01:51.567041   72390 provision.go:177] copyRemoteCerts
	I1014 15:01:51.567098   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:51.567121   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.570006   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570340   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.570368   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570562   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.570769   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.570941   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.571047   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:51.652956   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:51.677959   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 15:01:51.702009   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:01:51.727016   72390 provision.go:87] duration metric: took 238.449189ms to configureAuth
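
Note: configureAuth above regenerates the docker-machine style server certificate for the VM, signed by the local CA and carrying the SANs listed in the log (127.0.0.1, 192.168.50.128, the machine name, localhost, minikube). The self-contained Go sketch below issues such a certificate with crypto/x509; key sizes, lifetimes, and subject fields are assumptions for illustration, not minikube's exact values.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in for ca.pem / ca-key.pem: a fresh self-signed CA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs shown in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-201291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-201291", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.128")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
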
	I1014 15:01:51.727043   72390 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:51.727207   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:51.727276   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.729742   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730043   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.730065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.730418   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730578   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730735   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.730891   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.731097   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.731114   72390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:51.942847   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:51.942874   72390 machine.go:96] duration metric: took 813.575194ms to provisionDockerMachine
	I1014 15:01:51.942888   72390 start.go:293] postStartSetup for "default-k8s-diff-port-201291" (driver="kvm2")
	I1014 15:01:51.942903   72390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:51.942926   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.943250   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:51.943283   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.946246   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946608   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.946638   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946799   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.946984   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.947165   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.947293   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.030124   72390 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:52.034493   72390 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:52.034525   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:52.034625   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:52.034740   72390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:52.034834   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:52.044919   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:52.068326   72390 start.go:296] duration metric: took 125.426221ms for postStartSetup
	I1014 15:01:52.068370   72390 fix.go:56] duration metric: took 19.832650283s for fixHost
	I1014 15:01:52.068394   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.070949   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071362   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.071388   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071588   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.071788   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.071908   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.072065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.072231   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:52.072449   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:52.072468   72390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:52.179264   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918112.149610573
	
	I1014 15:01:52.179291   72390 fix.go:216] guest clock: 1728918112.149610573
	I1014 15:01:52.179301   72390 fix.go:229] Guest: 2024-10-14 15:01:52.149610573 +0000 UTC Remote: 2024-10-14 15:01:52.06837553 +0000 UTC m=+235.685992564 (delta=81.235043ms)
	I1014 15:01:52.179349   72390 fix.go:200] guest clock delta is within tolerance: 81.235043ms
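
Note: the clock check above runs `date +%s.%N` on the guest and compares it with the host's timestamp; the 81ms delta is within tolerance, so no time resync is needed. A simplified Go sketch of that comparison follows; the one-second tolerance is an assumption for illustration, and parsing via float64 loses sub-microsecond precision, which does not matter at millisecond scale.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the
// absolute difference from a host timestamp.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOutput, 64) // e.g. "1728918112.149610573"
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, nil
}

func main() {
	host := time.Unix(0, 1728918112068375530) // the "Remote" timestamp in the log
	d, _ := clockDelta("1728918112.149610573", host)
	fmt.Printf("delta=%v withinTolerance=%v\n", d, d <= time.Second)
}
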
	I1014 15:01:52.179354   72390 start.go:83] releasing machines lock for "default-k8s-diff-port-201291", held for 19.943664398s
	I1014 15:01:52.179387   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.179666   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:52.182457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.182834   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.182861   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.183000   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183598   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183883   72390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:52.183928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.183993   72390 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:52.184017   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.186499   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186692   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186890   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.186915   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187021   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.187050   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187086   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187288   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187331   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187479   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187485   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187597   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.187688   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187843   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.264102   72390 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:52.291233   72390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:52.443318   72390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:52.450321   72390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:52.450400   72390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:52.467949   72390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:52.467975   72390 start.go:495] detecting cgroup driver to use...
	I1014 15:01:52.468039   72390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:52.485758   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:52.500662   72390 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:52.500729   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:52.520846   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:52.535606   72390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:52.671062   72390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:52.845631   72390 docker.go:233] disabling docker service ...
	I1014 15:01:52.845694   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:52.867403   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:52.882344   72390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:53.020570   72390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:53.157941   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:53.174989   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:53.195729   72390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:53.195799   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.207613   72390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:53.207671   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.218838   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.231186   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.247521   72390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:53.258128   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.269119   72390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.287810   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.298576   72390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:53.308114   72390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:53.308169   72390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:53.322207   72390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
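
Note: the failed sysctl above is expected; /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the next step is `modprobe br_netfilter` followed by enabling ip_forward. The short Go sketch below only checks whether those kernel knobs are present and what they are set to; it does not load modules or change anything.

package main

import (
	"fmt"
	"os"
	"strings"
)

// sysctlValue reads a /proc/sys entry, returning false if it does not exist
// (for bridge-nf-call-iptables that means br_netfilter is not loaded yet).
func sysctlValue(path string) (string, bool) {
	b, err := os.ReadFile(path)
	if err != nil {
		return "", false
	}
	return strings.TrimSpace(string(b)), true
}

func main() {
	for _, p := range []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables",
		"/proc/sys/net/ipv4/ip_forward",
	} {
		if v, ok := sysctlValue(p); ok {
			fmt.Printf("%s = %s\n", p, v)
		} else {
			fmt.Printf("%s missing (load br_netfilter / enable forwarding)\n", p)
		}
	}
}
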
	I1014 15:01:53.332284   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:53.483702   72390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:53.581260   72390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:53.581341   72390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:53.586042   72390 start.go:563] Will wait 60s for crictl version
	I1014 15:01:53.586105   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:01:53.589931   72390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:53.634776   72390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:53.634864   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.664242   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.698374   72390 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:50.933590   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:52.935445   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:53.699730   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:53.702837   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703224   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:53.703245   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703528   72390 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:53.707720   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:53.721953   72390 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:53.722106   72390 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:53.722165   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:53.779083   72390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:53.779139   72390 ssh_runner.go:195] Run: which lz4
	I1014 15:01:53.783197   72390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:53.787515   72390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:53.787549   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:55.277150   72390 crio.go:462] duration metric: took 1.493980352s to copy over tarball
	I1014 15:01:55.277212   72390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:53.506315   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 15:01:53.507576   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.508228   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.508297   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.508202   73581 retry.go:31] will retry after 220.59125ms: waiting for machine to come up
	I1014 15:01:53.730853   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.731286   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.731339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.731257   73581 retry.go:31] will retry after 321.559387ms: waiting for machine to come up
	I1014 15:01:54.054891   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.055482   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.055509   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.055443   73581 retry.go:31] will retry after 444.912998ms: waiting for machine to come up
	I1014 15:01:54.502125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.502479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.502525   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.502462   73581 retry.go:31] will retry after 600.214254ms: waiting for machine to come up
	I1014 15:01:55.104962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.105479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.105504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.105425   73581 retry.go:31] will retry after 686.77698ms: waiting for machine to come up
	I1014 15:01:55.794125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.794825   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.794871   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.794717   73581 retry.go:31] will retry after 926.146146ms: waiting for machine to come up
	I1014 15:01:56.722712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:56.723153   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:56.723183   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:56.723112   73581 retry.go:31] will retry after 1.108272037s: waiting for machine to come up
	I1014 15:01:57.832729   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:57.833304   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:57.833356   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:57.833279   73581 retry.go:31] will retry after 1.442737664s: waiting for machine to come up
	I1014 15:01:55.435691   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.933561   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.424526   72390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.147277316s)
	I1014 15:01:57.424559   72390 crio.go:469] duration metric: took 2.147385522s to extract the tarball
	I1014 15:01:57.424566   72390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:57.461792   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:57.504424   72390 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:57.504450   72390 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:57.504460   72390 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.1 crio true true} ...
	I1014 15:01:57.504656   72390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-201291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
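The kubelet drop-in printed above is installed later in the log via scp plus systemctl daemon-reload/start; a by-hand equivalent of that install, with the unit contents taken verbatim from the log, would be roughly:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-201291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128

    [Install]
    EOF
    sudo systemctl daemon-reload
    sudo systemctl start kubelet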
	I1014 15:01:57.504759   72390 ssh_runner.go:195] Run: crio config
	I1014 15:01:57.555431   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:01:57.555453   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:57.555462   72390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:57.555482   72390 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-201291 NodeName:default-k8s-diff-port-201291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:57.555593   72390 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-201291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.128"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:57.555652   72390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:57.565953   72390 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:57.566025   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:57.576141   72390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1014 15:01:57.594855   72390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:57.611249   72390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
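Once the rendered config has been written to /var/tmp/minikube/kubeadm.yaml.new, the restart path compares it against the copy already on the node before deciding how much to redo (the diff and cp calls appear further down in the log). A minimal sketch of that comparison; the branching here is illustrative, the real decision logic lives in minikube's kubeadm.go:

    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml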
	I1014 15:01:57.628363   72390 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:57.632552   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:57.645588   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:57.769192   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:57.787654   72390 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291 for IP: 192.168.50.128
	I1014 15:01:57.787677   72390 certs.go:194] generating shared ca certs ...
	I1014 15:01:57.787695   72390 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:57.787865   72390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:57.787916   72390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:57.787930   72390 certs.go:256] generating profile certs ...
	I1014 15:01:57.788084   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/client.key
	I1014 15:01:57.788174   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key.517dfce8
	I1014 15:01:57.788223   72390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key
	I1014 15:01:57.788371   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:57.788407   72390 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:57.788417   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:57.788439   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:57.788460   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:57.788482   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:57.788521   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:57.789141   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:57.821159   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:57.875530   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:57.902687   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:57.935658   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 15:01:57.961987   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:57.987107   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:58.013544   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:58.039793   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:58.071154   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:58.102574   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:58.127398   72390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:58.144906   72390 ssh_runner.go:195] Run: openssl version
	I1014 15:01:58.150817   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:58.162122   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167170   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167240   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.173692   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:58.185769   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:58.197045   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201652   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201716   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.207559   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:58.218921   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:58.230822   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235774   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235832   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.241546   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
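The test/ln sequence above follows OpenSSL's hashed-symlink convention: each CA under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. The same link can be created by hand (the hash b5213941 for minikubeCA.pem comes from the log):

    H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${H}.0"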
	I1014 15:01:58.252618   72390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:58.257509   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:58.263891   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:58.270085   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:58.276427   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:58.282346   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:58.288396   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
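Each of the openssl runs above uses -checkend 86400, which exits non-zero when the certificate would expire within the next 24 hours; that is how soon-to-expire certificates are detected before the restart. A standalone form of the same check, with the path taken from the log:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h" || echo "expires within 24h"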
	I1014 15:01:58.294386   72390 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:58.294472   72390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:58.294517   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.342008   72390 cri.go:89] found id: ""
	I1014 15:01:58.342088   72390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:58.352478   72390 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:58.352512   72390 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:58.352566   72390 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:58.363158   72390 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:58.364106   72390 kubeconfig.go:125] found "default-k8s-diff-port-201291" server: "https://192.168.50.128:8444"
	I1014 15:01:58.366079   72390 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:58.375635   72390 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I1014 15:01:58.375666   72390 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:58.375680   72390 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:58.375733   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.411846   72390 cri.go:89] found id: ""
	I1014 15:01:58.411923   72390 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:58.428602   72390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:58.439214   72390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:58.439239   72390 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:58.439293   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1014 15:01:58.448475   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:58.448528   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:58.457816   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1014 15:01:58.467279   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:58.467352   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:58.477479   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.487899   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:58.487968   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.498296   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1014 15:01:58.507910   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:58.507977   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
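The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the expected server URL and removes any file that does not match (here none of them exist yet, so every grep fails and the rm is a no-op). Collapsed into one loop, the same cleanup is roughly:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done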
	I1014 15:01:58.517901   72390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:58.527983   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:58.654226   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.576099   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.790552   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.879043   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
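Rather than running a full kubeadm init, the restart path replays individual init phases against the rendered config; spelled out as plain commands, the five invocations above are (the addon phase follows later, once the apiserver is healthy):

    BIN=/var/lib/minikube/binaries/v1.31.1
    CFG=/var/tmp/minikube/kubeadm.yaml
    sudo env PATH="$BIN:$PATH" kubeadm init phase certs all         --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubeconfig all    --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase kubelet-start     --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase control-plane all --config "$CFG"
    sudo env PATH="$BIN:$PATH" kubeadm init phase etcd local        --config "$CFG"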
	I1014 15:01:59.963369   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:59.963462   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.464403   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.963891   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.994849   72390 api_server.go:72] duration metric: took 1.031477803s to wait for apiserver process to appear ...
	I1014 15:02:00.994875   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:00.994897   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:01:59.278031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:59.278558   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:59.278586   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:59.278519   73581 retry.go:31] will retry after 1.187069828s: waiting for machine to come up
	I1014 15:02:00.467810   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:00.468237   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:00.468267   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:00.468195   73581 retry.go:31] will retry after 1.667312665s: waiting for machine to come up
	I1014 15:02:02.137067   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:02.137569   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:02.137590   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:02.137530   73581 retry.go:31] will retry after 1.910892221s: waiting for machine to come up
	I1014 15:01:59.994818   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:00.130085   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:00.130109   72173 pod_ready.go:82] duration metric: took 13.202838085s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:00.130121   72173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:02.142821   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:03.649728   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:03.649764   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:03.649780   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:03.754772   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:03.754805   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:03.995106   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.020015   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.020040   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.495270   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.501643   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.501694   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.995049   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.002865   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:05.002893   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:05.495412   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.499936   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:02:05.506656   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:02:05.506685   72390 api_server.go:131] duration metric: took 4.511803211s to wait for apiserver health ...
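The polling above probes https://192.168.50.128:8444/healthz until it returns 200. The initial 403 for system:anonymous at 15:02:03 is consistent with the RBAC bootstrap roles not having been created yet (the later 500 bodies still list rbac/bootstrap-roles as failed), and the 500s clear once every post-start hook reports ok. A rough manual probe, unauthenticated, so it may see the same 403 early on:

    curl -sk "https://192.168.50.128:8444/healthz?verbose"
    # 200 with "ok" once all [+] checks pass; 500 with the per-hook list while any hook still fails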
	I1014 15:02:05.506694   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:02:05.506700   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:05.508420   72390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:02:05.509685   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:02:05.521314   72390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:02:05.543021   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:02:05.553508   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:02:05.553539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:02:05.553548   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:02:05.553555   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:02:05.553562   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:02:05.553567   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:02:05.553572   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:02:05.553577   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:02:05.553581   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:02:05.553587   72390 system_pods.go:74] duration metric: took 10.544168ms to wait for pod list to return data ...
	I1014 15:02:05.553593   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:02:05.558889   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:02:05.558917   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:02:05.558929   72390 node_conditions.go:105] duration metric: took 5.331009ms to run NodePressure ...
	I1014 15:02:05.558948   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:05.819037   72390 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826431   72390 kubeadm.go:739] kubelet initialised
	I1014 15:02:05.826456   72390 kubeadm.go:740] duration metric: took 7.391664ms waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826463   72390 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:05.833547   72390 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.840150   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840175   72390 pod_ready.go:82] duration metric: took 6.599969ms for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.840186   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840205   72390 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.850319   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850346   72390 pod_ready.go:82] duration metric: took 10.130163ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.850359   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850368   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.857192   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857215   72390 pod_ready.go:82] duration metric: took 6.838793ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.857228   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857237   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.946611   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946646   72390 pod_ready.go:82] duration metric: took 89.397304ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.946663   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946674   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.346368   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346400   72390 pod_ready.go:82] duration metric: took 399.71513ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.346413   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346423   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.746899   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746928   72390 pod_ready.go:82] duration metric: took 400.494872ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.746941   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746951   72390 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:07.146147   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146175   72390 pod_ready.go:82] duration metric: took 399.215075ms for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:07.146199   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146215   72390 pod_ready.go:39] duration metric: took 1.319742206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:07.146237   72390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:02:07.158049   72390 ops.go:34] apiserver oom_adj: -16
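
ops.go reads the apiserver's OOM score adjustment by shelling out to cat /proc/$(pgrep kube-apiserver)/oom_adj; the logged -16 keeps the apiserver well away from the kernel OOM killer. A rough Go equivalent of that probe, assuming pgrep is available on the guest:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the kube-apiserver PID the same way the logged command does.
        out, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.TrimSpace(string(out))

        // oom_adj is the legacy knob; a negative value lowers the OOM-kill priority.
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s", data)
    }
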
	I1014 15:02:07.158072   72390 kubeadm.go:597] duration metric: took 8.805549392s to restartPrimaryControlPlane
	I1014 15:02:07.158082   72390 kubeadm.go:394] duration metric: took 8.863707122s to StartCluster
	I1014 15:02:07.158102   72390 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.158192   72390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:07.159622   72390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.159917   72390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:02:07.159968   72390 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:02:07.160052   72390 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160074   72390 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160086   72390 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:02:07.160125   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160133   72390 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160166   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:07.160181   72390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-201291"
	I1014 15:02:07.160179   72390 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160228   72390 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160251   72390 addons.go:243] addon metrics-server should already be in state true
	I1014 15:02:07.160312   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160472   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160508   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160692   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160712   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160729   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160770   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.161892   72390 out.go:177] * Verifying Kubernetes components...
	I1014 15:02:07.163368   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:07.176101   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1014 15:02:07.176351   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I1014 15:02:07.176705   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.176834   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.177272   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177298   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177392   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177413   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177600   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I1014 15:02:07.177639   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.177703   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.178070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.178181   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178244   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178252   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178285   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178566   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.178590   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.178944   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.179107   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.181971   72390 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.181989   72390 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:02:07.182024   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.182278   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.182322   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.194707   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1014 15:02:07.195401   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.196015   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.196043   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.196413   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.196511   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35479
	I1014 15:02:07.196618   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.196977   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.197479   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.197497   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.197520   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I1014 15:02:07.197848   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.197981   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.198048   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.198544   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.198567   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.198636   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199017   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.199817   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.199824   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199864   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.200860   72390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:07.201674   72390 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:02:04.050521   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:04.051060   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:04.051099   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:04.051015   73581 retry.go:31] will retry after 2.29433775s: waiting for machine to come up
	I1014 15:02:06.347519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:06.347985   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:06.348004   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:06.347945   73581 retry.go:31] will retry after 3.499922823s: waiting for machine to come up
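
While the default-k8s-diff-port profile moves on to addons, the old-k8s-version-399767 VM is still waiting for a DHCP lease, so libmachine's retry helper keeps polling with growing delays ("will retry after 2.29s / 3.49s"). A self-contained sketch of that wait-with-retry pattern; lookupIP and the backoff values are stand-ins, not the libvirt code:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for the libvirt DHCP-lease query; it fails until the guest has an address.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for attempt := 1; ; attempt++ {
            ip, err := lookupIP()
            if err == nil {
                return ip, nil
            }
            if time.Now().After(deadline) {
                return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
            }
            // Back off with a little jitter, like the logged retry.go lines.
            delay := time.Duration(attempt)*time.Second + time.Duration(rand.Intn(500))*time.Millisecond
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
        }
    }

    func main() {
        ip, err := waitForIP(30 * time.Second)
        fmt.Println(ip, err)
    }
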
	I1014 15:02:07.202461   72390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.202476   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:02:07.202491   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.203259   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:02:07.203275   72390 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:02:07.203292   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.205760   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206124   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.206150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206375   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.206533   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.206676   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.206729   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206858   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.207134   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.207150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.207248   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.207455   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.207559   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.207677   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.219554   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I1014 15:02:07.220070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.220483   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.220508   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.220842   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.221004   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.222706   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.222961   72390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.222979   72390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:02:07.222997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.225715   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226209   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.226250   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.226964   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.227118   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.227254   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.362105   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:07.384279   72390 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:07.438536   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.551868   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:02:07.551897   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:02:07.606347   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.656287   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:02:07.656313   72390 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:02:07.687002   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.687027   72390 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:02:07.751715   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
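
Each addon manifest is scp'd to /etc/kubernetes/addons and then applied in a single kubectl invocation inside the VM, as logged above. A sketch of that final step written as a plain local exec (the SSH transport that ssh_runner actually uses is omitted; paths mirror the log):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        manifests := []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        }
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.1/kubectl", "apply",
        }
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        // The logged command runs under sudo so kubectl can read the root-owned kubeconfig.
        out, err := exec.Command("sudo", args...).CombinedOutput()
        fmt.Println(strings.TrimSpace(string(out)), err)
    }
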
	I1014 15:02:07.810869   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.810902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811193   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.811247   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811262   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811273   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.811281   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811546   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811562   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811576   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.819897   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.819917   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.820156   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.820206   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.820179   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581553   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581583   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.581902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581943   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.581955   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.581974   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581986   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.582197   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.582211   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595214   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595493   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.595569   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595589   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595609   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595623   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595833   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595847   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595864   72390 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-201291"
	I1014 15:02:08.597967   72390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:02:04.638029   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:07.139428   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.248505   71679 start.go:364] duration metric: took 53.170862497s to acquireMachinesLock for "no-preload-813300"
	I1014 15:02:11.248567   71679 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:02:11.248581   71679 fix.go:54] fixHost starting: 
	I1014 15:02:11.248978   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:11.249022   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:11.266270   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I1014 15:02:11.266780   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:11.267302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:02:11.267319   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:11.267675   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:11.267842   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:11.267984   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:02:11.269459   71679 fix.go:112] recreateIfNeeded on no-preload-813300: state=Stopped err=<nil>
	I1014 15:02:11.269484   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	W1014 15:02:11.269589   71679 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:02:11.271434   71679 out.go:177] * Restarting existing kvm2 VM for "no-preload-813300" ...
	I1014 15:02:08.599138   72390 addons.go:510] duration metric: took 1.439175047s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:02:09.388573   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:09.851017   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851562   72639 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 15:02:09.851582   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851587   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 15:02:09.851961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.851991   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | skip adding static IP to network mk-old-k8s-version-399767 - found existing host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"}
	I1014 15:02:09.852009   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 15:02:09.852021   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 15:02:09.852031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 15:02:09.854039   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854351   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.854378   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854493   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 15:02:09.854517   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 15:02:09.854547   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:09.854559   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 15:02:09.854572   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 15:02:09.979174   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:09.979594   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 15:02:09.980252   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:09.983038   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.983502   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983891   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 15:02:09.984191   72639 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:09.984220   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:09.984487   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:09.986947   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987361   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.987389   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987514   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:09.987682   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987830   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987924   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:09.988076   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:09.988338   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:09.988352   72639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:10.098944   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:10.098968   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099242   72639 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 15:02:10.099268   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099437   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.101961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102298   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.102320   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102468   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.102670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102846   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102980   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.103124   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.103337   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.103353   72639 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 15:02:10.226037   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 15:02:10.226069   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.228712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229059   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.229082   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229228   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.229408   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229549   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.229804   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.230001   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.230018   72639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:10.344175   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
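
The hostname fix above is idempotent: if any line already ends in the machine name it is left alone, an existing 127.0.1.1 entry is rewritten, and otherwise a new one is appended. The same logic as a small pure-string Go helper (the sudo tee plumbing and file I/O are left out):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry returns hosts content that maps 127.0.1.1 to hostname exactly once.
    func ensureHostsEntry(hosts, hostname string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
            return hosts // already present, nothing to do
        }
        loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loopback.MatchString(hosts) {
            return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
        before := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
        fmt.Print(ensureHostsEntry(before, "old-k8s-version-399767"))
    }
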
	I1014 15:02:10.344206   72639 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:10.344270   72639 buildroot.go:174] setting up certificates
	I1014 15:02:10.344284   72639 provision.go:84] configureAuth start
	I1014 15:02:10.344302   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.344632   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:10.347200   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347587   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.347623   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347812   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.349962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350332   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.350364   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350502   72639 provision.go:143] copyHostCerts
	I1014 15:02:10.350558   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:10.350574   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:10.350646   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:10.350734   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:10.350742   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:10.350762   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:10.350812   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:10.350819   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:10.350837   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:10.350887   72639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
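
provision.go mints a per-machine server certificate whose SANs cover 127.0.0.1, the VM IP, localhost, minikube and the machine name, signed with the minikube CA key. A compact crypto/x509 sketch of a certificate carrying that SAN set; it self-signs for brevity instead of using the CA:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-399767"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-399767"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.138")},
        }
        // Self-signed here; minikube would pass its CA certificate and CA key as the parent instead.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        fmt.Println("SANs:", tmpl.DNSNames, tmpl.IPAddresses)
    }

Listing both DNSNames and IPAddresses is what lets clients verify the endpoint whether they dial the machine by IP or by name.
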
	I1014 15:02:10.602118   72639 provision.go:177] copyRemoteCerts
	I1014 15:02:10.602175   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:10.602199   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.604519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604744   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.604776   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.605127   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.605273   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.605403   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:10.689081   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:10.713512   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 15:02:10.738086   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:10.762274   72639 provision.go:87] duration metric: took 417.977128ms to configureAuth
	I1014 15:02:10.762307   72639 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:10.762486   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 15:02:10.762552   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.765134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765442   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.765469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765600   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.765756   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765903   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765998   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.766131   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.766297   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.766311   72639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:11.011252   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:11.011279   72639 machine.go:96] duration metric: took 1.027069423s to provisionDockerMachine
	I1014 15:02:11.011292   72639 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 15:02:11.011304   72639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:11.011349   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.011716   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:11.011751   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.014418   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014754   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.014790   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.015125   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.015260   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.015376   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.097883   72639 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:11.102452   72639 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:11.102481   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:11.102551   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:11.102687   72639 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:11.102781   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:11.112774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:11.138211   72639 start.go:296] duration metric: took 126.906035ms for postStartSetup
	I1014 15:02:11.138247   72639 fix.go:56] duration metric: took 18.958741429s for fixHost
	I1014 15:02:11.138270   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.140740   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141100   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.141139   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141280   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.141484   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141668   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141811   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.141974   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:11.142131   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:11.142141   72639 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:11.248330   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918131.224010283
	
	I1014 15:02:11.248355   72639 fix.go:216] guest clock: 1728918131.224010283
	I1014 15:02:11.248373   72639 fix.go:229] Guest: 2024-10-14 15:02:11.224010283 +0000 UTC Remote: 2024-10-14 15:02:11.138252894 +0000 UTC m=+233.173555624 (delta=85.757389ms)
	I1014 15:02:11.248399   72639 fix.go:200] guest clock delta is within tolerance: 85.757389ms
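
fix.go runs date +%s.%N in the guest and compares the result with the host clock; the restart only proceeds because the ~86 ms delta is inside tolerance. A small sketch of that comparison using the numbers from the log; the one-second threshold is an assumption, not the value hard-coded in minikube:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // parseEpoch turns `date +%s.%N` output such as "1728918131.224010283" into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseEpoch("1728918131.224010283")
        if err != nil {
            panic(err)
        }
        host := time.Date(2024, time.October, 14, 15, 2, 11, 138252894, time.UTC)
        delta := guest.Sub(host)
        const tolerance = time.Second // assumed threshold for illustration only
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, math.Abs(delta.Seconds()) < tolerance.Seconds())
    }
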
	I1014 15:02:11.248406   72639 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 19.068928968s
	I1014 15:02:11.248434   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.248692   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:11.251774   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.252176   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252358   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.252840   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253017   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253104   72639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:11.253150   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.253232   72639 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:11.253259   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.256105   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256529   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256662   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.256732   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256771   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256844   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.256932   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.257003   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257141   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.257131   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.257296   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257414   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.363838   72639 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:11.370414   72639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:11.521232   72639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:11.527623   72639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:11.527712   72639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:11.544532   72639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:11.544559   72639 start.go:495] detecting cgroup driver to use...
	I1014 15:02:11.544614   72639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:11.561693   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:11.576555   72639 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:11.576622   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:11.593830   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:11.608785   72639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:11.731034   72639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:11.909278   72639 docker.go:233] disabling docker service ...
	I1014 15:02:11.909359   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:11.931218   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:11.951710   72639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:12.103012   72639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:12.252290   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:12.270497   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:12.293240   72639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 15:02:12.293297   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.304881   72639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:12.304958   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.316294   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.328591   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
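
The sed calls above retarget CRI-O at the registry.k8s.io/pause:3.2 pause image, force the cgroupfs cgroup manager, and pin conmon_cgroup to "pod" by editing /etc/crio/crio.conf.d/02-crio.conf in place. A Go sketch of the same rewrite over the file's text; the input fragment is made up for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "system.slice"
    `
        // Equivalent of the logged sed edits: swap the pause image, drop the old conmon_cgroup
        // line, then force cgroupfs and reattach conmon_cgroup = "pod" right after it.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
        conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
        fmt.Print(conf)
    }
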
	I1014 15:02:12.340085   72639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:12.351765   72639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:12.362454   72639 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:12.362525   72639 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:12.376865   72639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
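	The sysctl probe fails with status 255 because /proc/sys/net/bridge does not exist until the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding. A short sketch of that fallback order, assuming root privileges; the helper is hypothetical, not libmachine's API:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// The probe only succeeds once br_netfilter has been loaded.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("bridge netfilter not available yet; loading br_netfilter")
		if err := run("modprobe", "br_netfilter"); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
			os.Exit(1)
		}
	}
	// kube-proxy and the bridge CNI need IPv4 forwarding regardless.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}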
	I1014 15:02:12.387779   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:12.528541   72639 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:12.635262   72639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:12.635335   72639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:12.641070   72639 start.go:563] Will wait 60s for crictl version
	I1014 15:02:12.641121   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:12.645111   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:12.691103   72639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:12.691199   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.720182   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.754856   72639 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 15:02:12.756005   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:12.759369   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.759890   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:12.759924   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.760164   72639 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:12.765342   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:12.782182   72639 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:12.782307   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 15:02:12.782374   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:12.841797   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:12.841871   72639 ssh_runner.go:195] Run: which lz4
	I1014 15:02:12.846193   72639 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:02:12.850982   72639 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:02:12.851019   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 15:02:09.636366   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.637804   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:13.638684   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.272626   71679 main.go:141] libmachine: (no-preload-813300) Calling .Start
	I1014 15:02:11.272827   71679 main.go:141] libmachine: (no-preload-813300) Ensuring networks are active...
	I1014 15:02:11.273510   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network default is active
	I1014 15:02:11.273954   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network mk-no-preload-813300 is active
	I1014 15:02:11.274410   71679 main.go:141] libmachine: (no-preload-813300) Getting domain xml...
	I1014 15:02:11.275263   71679 main.go:141] libmachine: (no-preload-813300) Creating domain...
	I1014 15:02:12.614590   71679 main.go:141] libmachine: (no-preload-813300) Waiting to get IP...
	I1014 15:02:12.615572   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.616018   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.616092   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.616013   73776 retry.go:31] will retry after 302.312986ms: waiting for machine to come up
	I1014 15:02:12.919678   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.920039   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.920074   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.920005   73776 retry.go:31] will retry after 371.392955ms: waiting for machine to come up
	I1014 15:02:13.292596   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.293214   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.293244   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.293164   73776 retry.go:31] will retry after 299.379251ms: waiting for machine to come up
	I1014 15:02:13.594808   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.595344   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.595370   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.595297   73776 retry.go:31] will retry after 598.480386ms: waiting for machine to come up
	I1014 15:02:14.195149   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.195744   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.195775   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.195696   73776 retry.go:31] will retry after 567.581822ms: waiting for machine to come up
	I1014 15:02:14.764315   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.764863   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.764886   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.764815   73776 retry.go:31] will retry after 587.597591ms: waiting for machine to come up
	I1014 15:02:15.353495   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:15.353948   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:15.353980   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:15.353896   73776 retry.go:31] will retry after 1.024496536s: waiting for machine to come up
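	Interleaved with the old-k8s-version restart, the no-preload-813300 domain is still waiting for a DHCP lease, and retry.go re-checks with small randomized delays. A generic sketch of that wait loop; lookupIP is a stand-in for the libvirt lease query, not the real libmachine call:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying the libvirt network for the domain's
// current lease; it always fails here so the retry path is exercised.
func lookupIP(domain string) (string, error) { return "", errNoLease }

func waitForIP(domain string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Randomized delays roughly in the 0.3s-1s range seen in the log.
		delay := 300*time.Millisecond + time.Duration(rand.Int63n(int64(700*time.Millisecond)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
	return "", fmt.Errorf("timed out waiting for an IP for %s", domain)
}

func main() {
	if _, err := waitForIP("no-preload-813300", 3*time.Second); err != nil {
		fmt.Println(err)
	}
}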
	I1014 15:02:11.889135   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:13.889200   72390 node_ready.go:49] node "default-k8s-diff-port-201291" has status "Ready":"True"
	I1014 15:02:13.889228   72390 node_ready.go:38] duration metric: took 6.504919545s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:13.889240   72390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:13.898112   72390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:15.907127   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:14.579304   72639 crio.go:462] duration metric: took 1.733147869s to copy over tarball
	I1014 15:02:14.579405   72639 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:02:17.644891   72639 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06545265s)
	I1014 15:02:17.644954   72639 crio.go:469] duration metric: took 3.065620277s to extract the tarball
	I1014 15:02:17.644979   72639 ssh_runner.go:146] rm: /preloaded.tar.lz4
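	The preload path above is: stat /preloaded.tar.lz4, copy the ~450MB tarball over when it is missing, unpack it under /var with lz4, then delete it. A condensed sketch of the extract-and-clean-up step, reusing the tar flags from the log; error handling is trimmed and the command is assumed to run on the guest:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4"
	// Mirror the stat existence check before extracting.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Fprintln(os.Stderr, "no preload tarball:", err)
		os.Exit(1)
	}
	// Same flags as the log: preserve xattrs/capabilities, decompress with
	// lz4, and unpack under /var where CRI-O keeps its image store.
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
	_ = os.Remove(tarball) // free the space once the images are in place
}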
	I1014 15:02:17.688304   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:17.727862   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:17.727888   72639 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:17.727984   72639 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.727995   72639 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.728006   72639 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.728036   72639 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.727986   72639 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.728104   72639 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.728169   72639 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 15:02:17.728267   72639 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.729941   72639 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729954   72639 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 15:02:17.729984   72639 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.729999   72639 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.729913   72639 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.730335   72639 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.889181   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.912728   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.919124   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.920117   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.934314   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 15:02:17.951143   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.956588   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.964968   72639 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 15:02:17.965031   72639 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.965066   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:16.139535   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:18.637888   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:16.379768   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:16.380165   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:16.380236   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:16.380142   73776 retry.go:31] will retry after 1.022289492s: waiting for machine to come up
	I1014 15:02:17.403892   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:17.404406   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:17.404430   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:17.404383   73776 retry.go:31] will retry after 1.277226075s: waiting for machine to come up
	I1014 15:02:18.683704   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:18.684176   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:18.684200   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:18.684126   73776 retry.go:31] will retry after 2.146714263s: waiting for machine to come up
	I1014 15:02:18.406707   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.412201   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:21.406229   72390 pod_ready.go:93] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.406256   72390 pod_ready.go:82] duration metric: took 7.508120497s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.406269   72390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413868   72390 pod_ready.go:93] pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.413896   72390 pod_ready.go:82] duration metric: took 7.618897ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413910   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:18.041388   72639 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 15:02:18.041436   72639 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.041489   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041504   72639 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 15:02:18.041540   72639 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.041579   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069534   72639 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 15:02:18.069582   72639 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 15:02:18.069631   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069794   72639 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 15:02:18.069821   72639 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.069852   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.096492   72639 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 15:02:18.096536   72639 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.096575   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104764   72639 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 15:02:18.104810   72639 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.104816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.104854   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104876   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.104885   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.104980   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.104984   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.105025   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.119784   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.213816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.241644   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.288717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.288820   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.288931   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.289005   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.295481   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.376936   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.393755   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.449717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.449798   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.449824   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.449904   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.461905   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.508804   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 15:02:18.521502   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 15:02:18.612103   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 15:02:18.613450   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 15:02:18.613548   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 15:02:18.613625   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 15:02:18.613715   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 15:02:18.741774   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:18.888495   72639 cache_images.go:92] duration metric: took 1.16058525s to LoadCachedImages
	W1014 15:02:18.888578   72639 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1014 15:02:18.888594   72639 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 15:02:18.888707   72639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:18.888791   72639 ssh_runner.go:195] Run: crio config
	I1014 15:02:18.943058   72639 cni.go:84] Creating CNI manager for ""
	I1014 15:02:18.943082   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:18.943091   72639 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:18.943108   72639 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 15:02:18.943225   72639 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:18.943285   72639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 15:02:18.956635   72639 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:18.956727   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:18.970846   72639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 15:02:18.992163   72639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:19.012061   72639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
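	The kubeadm.yaml.new just written contains the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration block printed above, filled in from the node's IP, runtime socket and hostname. A minimal sketch of rendering such a file with text/template; the template covers only the InitConfiguration fragment and is illustrative, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
`

type nodeParams struct {
	NodeIP        string
	APIServerPort int
	CRISocket     string
	NodeName      string
}

func main() {
	p := nodeParams{
		NodeIP:        "192.168.72.138",
		APIServerPort: 8443,
		CRISocket:     "/var/run/crio/crio.sock",
		NodeName:      "old-k8s-version-399767",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}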
	I1014 15:02:19.033158   72639 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:19.037195   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
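	The bash one-liner above keeps /etc/hosts idempotent: drop any existing control-plane.minikube.internal line, append the fresh IP mapping, and copy the temp file back into place. The same logic in Go, with the path and entry taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<host>" and appends
// "<ip>\t<host>", matching the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.138", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}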
	I1014 15:02:19.051127   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:19.172992   72639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:19.190545   72639 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 15:02:19.190572   72639 certs.go:194] generating shared ca certs ...
	I1014 15:02:19.190592   72639 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.190786   72639 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:19.190843   72639 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:19.190853   72639 certs.go:256] generating profile certs ...
	I1014 15:02:19.190973   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 15:02:19.191053   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 15:02:19.191108   72639 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 15:02:19.191264   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:19.191302   72639 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:19.191314   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:19.191345   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:19.191374   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:19.191423   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:19.191477   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:19.192328   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:19.248981   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:19.281262   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:19.312859   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:19.351940   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 15:02:19.405710   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:19.441313   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:19.481774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 15:02:19.509433   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:19.537994   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:19.564460   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:19.593632   72639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:19.614775   72639 ssh_runner.go:195] Run: openssl version
	I1014 15:02:19.623548   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:19.636680   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642225   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642286   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.648609   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:19.661130   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:19.672988   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678119   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678189   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.684583   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:19.696685   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:19.708338   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713443   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713502   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.719482   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:19.731720   72639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:19.739006   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:19.747558   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:19.756399   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:19.764987   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:19.773320   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:19.781239   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
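	Each openssl x509 -noout -checkend 86400 call above asks whether the certificate expires within the next 24 hours. The equivalent check in Go with crypto/x509, using a few of the paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires before
// now+window, i.e. what openssl x509 -checkend <seconds> tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		expiringSoon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", expiringSoon, err)
	}
}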
	I1014 15:02:19.788638   72639 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:19.788753   72639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:19.788810   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.829586   72639 cri.go:89] found id: ""
	I1014 15:02:19.829641   72639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:19.844632   72639 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:19.844654   72639 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:19.844708   72639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:19.860547   72639 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:19.861848   72639 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:19.862755   72639 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-399767" cluster setting kubeconfig missing "old-k8s-version-399767" context setting]
	I1014 15:02:19.863757   72639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.927447   72639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:19.940830   72639 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.138
	I1014 15:02:19.940919   72639 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:19.940947   72639 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:19.941009   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.983689   72639 cri.go:89] found id: ""
	I1014 15:02:19.983769   72639 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:20.007079   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:20.023868   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:20.023896   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:20.023971   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:20.038661   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:20.038734   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:20.054357   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:20.068771   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:20.068843   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:20.081157   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.095416   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:20.095483   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.109099   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:20.120608   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:20.120680   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
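	The four grep/rm pairs above apply one rule: any leftover kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 is removed so that kubeadm can regenerate it. A compact sketch of that rule (missing files and mismatching files are treated the same way):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Stale or absent: remove it and let kubeadm write a fresh one.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
			continue
		}
		fmt.Println("keeping", f)
	}
}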
	I1014 15:02:20.133217   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:20.145896   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:20.311840   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.472918   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.161037865s)
	I1014 15:02:21.472953   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.739827   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.833423   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.931874   72639 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:21.931987   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.432595   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.932784   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:21.138446   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.636836   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.833532   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:20.833974   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:20.834000   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:20.833930   73776 retry.go:31] will retry after 1.936414638s: waiting for machine to come up
	I1014 15:02:22.771789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:22.772183   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:22.772206   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:22.772148   73776 retry.go:31] will retry after 2.51581517s: waiting for machine to come up
	I1014 15:02:25.290082   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:25.290491   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:25.290518   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:25.290453   73776 retry.go:31] will retry after 3.279920525s: waiting for machine to come up
	I1014 15:02:21.420355   72390 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.420385   72390 pod_ready.go:82] duration metric: took 6.465669ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.420398   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427723   72390 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.427747   72390 pod_ready.go:82] duration metric: took 7.340946ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427760   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433500   72390 pod_ready.go:93] pod "kube-proxy-rh82t" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.433526   72390 pod_ready.go:82] duration metric: took 5.757064ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433543   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802632   72390 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.802660   72390 pod_ready.go:82] duration metric: took 369.107697ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802672   72390 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:23.811046   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:26.308105   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
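	The pod_ready.go lines poll each system pod's Ready condition until it flips to True or the 6m budget runs out; the metrics-server pod is still reporting Ready:False here. A sketch of the same check with client-go, assuming a recent client-go release; the kubeconfig path is a placeholder:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady returns true when the PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-994hx", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("pod never became Ready")
}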
	I1014 15:02:23.432728   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.932296   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.432079   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.932064   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.432201   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.932119   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.432423   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.932675   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.432633   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.932380   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.637287   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.137136   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.572901   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:28.573383   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:28.573421   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:28.573304   73776 retry.go:31] will retry after 5.283390724s: waiting for machine to come up
	I1014 15:02:28.310800   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:30.400310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.432518   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.932871   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.432350   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.932761   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.432621   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.932873   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.432716   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.932364   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.432747   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.933039   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.637300   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.136858   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.858151   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858626   71679 main.go:141] libmachine: (no-preload-813300) Found IP for machine: 192.168.61.13
	I1014 15:02:33.858660   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has current primary IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858670   71679 main.go:141] libmachine: (no-preload-813300) Reserving static IP address...
	I1014 15:02:33.859001   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.859022   71679 main.go:141] libmachine: (no-preload-813300) Reserved static IP address: 192.168.61.13
	I1014 15:02:33.859040   71679 main.go:141] libmachine: (no-preload-813300) DBG | skip adding static IP to network mk-no-preload-813300 - found existing host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"}
	I1014 15:02:33.859055   71679 main.go:141] libmachine: (no-preload-813300) DBG | Getting to WaitForSSH function...
	I1014 15:02:33.859065   71679 main.go:141] libmachine: (no-preload-813300) Waiting for SSH to be available...
	I1014 15:02:33.860949   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861245   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.861287   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861398   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH client type: external
	I1014 15:02:33.861424   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa (-rw-------)
	I1014 15:02:33.861460   71679 main.go:141] libmachine: (no-preload-813300) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:33.861476   71679 main.go:141] libmachine: (no-preload-813300) DBG | About to run SSH command:
	I1014 15:02:33.861488   71679 main.go:141] libmachine: (no-preload-813300) DBG | exit 0
	I1014 15:02:33.991450   71679 main.go:141] libmachine: (no-preload-813300) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:33.991854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetConfigRaw
	I1014 15:02:33.992623   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:33.995514   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.995884   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.995908   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.996225   71679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/config.json ...
	I1014 15:02:33.996549   71679 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:33.996572   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:33.996784   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:33.999385   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999751   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.999789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999948   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.000135   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000312   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000455   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.000648   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.000874   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.000890   71679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:34.114981   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:34.115014   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115245   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:02:34.115272   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115421   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.117557   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.117890   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.117929   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.118027   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.118210   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118365   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118524   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.118720   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.118913   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.118932   71679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-813300 && echo "no-preload-813300" | sudo tee /etc/hostname
	I1014 15:02:34.246092   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-813300
	
	I1014 15:02:34.246149   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.248672   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249095   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.249122   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249331   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.249505   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249860   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.250061   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.250272   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.250297   71679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:34.373470   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
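(Aside: the shell snippet run over SSH above guards /etc/hosts: if no line already ends with the machine hostname, it either rewrites an existing 127.0.1.1 entry or appends a new one. A sketch of the same transformation done on a hosts-file string in plain Go, purely illustrative; minikube performs it in the guest via the shell, not like this.)

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname mirrors the guarded /etc/hosts edit above: if no line maps
// the hostname yet, rewrite the 127.0.1.1 entry or append one.
func ensureHostname(hosts, name string) string {
	hasName := regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`)
	if hasName.MatchString(hosts) {
		return hosts // hostname already present
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n127.0.1.1 minikube\n", "no-preload-813300"))
}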
	I1014 15:02:34.373512   71679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:34.373576   71679 buildroot.go:174] setting up certificates
	I1014 15:02:34.373594   71679 provision.go:84] configureAuth start
	I1014 15:02:34.373613   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.373903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:34.376697   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.376986   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.377009   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.377137   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.379469   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379813   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.379838   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379981   71679 provision.go:143] copyHostCerts
	I1014 15:02:34.380034   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:34.380050   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:34.380106   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:34.380194   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:34.380201   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:34.380223   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:34.380282   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:34.380288   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:34.380305   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:34.380362   71679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.no-preload-813300 san=[127.0.0.1 192.168.61.13 localhost minikube no-preload-813300]
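(Aside: the provision.go step above issues a server certificate whose SANs cover the loopback address, the machine IP, and the host names listed in the log. A self-contained sketch of putting those SANs on an x509 certificate with the Go standard library; it is self-signed here for brevity, whereas minikube signs the server cert with its CA key (ca.pem/ca-key.pem). The SANs, organization, and 26280h lifetime are copied from the log and profile config.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-813300"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the "generating server cert ... san=[...]" line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.13")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-813300"},
	}
	// Self-signed for the sketch; minikube signs with its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}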
	I1014 15:02:34.421281   71679 provision.go:177] copyRemoteCerts
	I1014 15:02:34.421331   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:34.421353   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.423903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424219   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.424248   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424471   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.424665   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.424807   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.424948   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.512847   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:34.539814   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:02:34.568946   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:34.593444   71679 provision.go:87] duration metric: took 219.83393ms to configureAuth
	I1014 15:02:34.593467   71679 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:34.593661   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:34.593744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.596317   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596626   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.596659   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596819   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.597008   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597159   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597295   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.597433   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.597611   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.597631   71679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:34.837224   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:34.837244   71679 machine.go:96] duration metric: took 840.680679ms to provisionDockerMachine
	I1014 15:02:34.837256   71679 start.go:293] postStartSetup for "no-preload-813300" (driver="kvm2")
	I1014 15:02:34.837265   71679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:34.837281   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:34.837593   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:34.837625   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.840357   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840677   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.840702   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840845   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.841025   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.841193   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.841363   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.930754   71679 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:34.935428   71679 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:34.935457   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:34.935541   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:34.935659   71679 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:34.935795   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:34.946363   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:34.973029   71679 start.go:296] duration metric: took 135.76066ms for postStartSetup
	I1014 15:02:34.973074   71679 fix.go:56] duration metric: took 23.72449375s for fixHost
	I1014 15:02:34.973098   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.975897   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976211   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.976237   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976487   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.976687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976813   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976923   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.977075   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.977294   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.977309   71679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:35.091556   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918155.078304162
	
	I1014 15:02:35.091581   71679 fix.go:216] guest clock: 1728918155.078304162
	I1014 15:02:35.091590   71679 fix.go:229] Guest: 2024-10-14 15:02:35.078304162 +0000 UTC Remote: 2024-10-14 15:02:34.973079478 +0000 UTC m=+359.485826316 (delta=105.224684ms)
	I1014 15:02:35.091610   71679 fix.go:200] guest clock delta is within tolerance: 105.224684ms
	I1014 15:02:35.091616   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 23.843071366s
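(Aside: the fix.go lines above run `date +%s.%N` on the guest and compare the result against the host's clock to decide whether the skew is tolerable. A sketch of parsing that output and computing the delta, stdlib only; the host timestamp and guest output are copied from the log, and the 2s tolerance used here is an assumption for illustration.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` from the guest and
// returns how far the guest clock is from the given host time.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(out), 64)
	if err != nil {
		return 0, err
	}
	sec, frac := math.Modf(secs)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	return guest.Sub(host), nil
}

func main() {
	host := time.Date(2024, 10, 14, 15, 2, 34, 973079478, time.UTC)
	delta, err := guestClockDelta("1728918155.078304162", host)
	if err != nil {
		panic(err)
	}
	abs := delta
	if abs < 0 {
		abs = -abs
	}
	fmt.Printf("guest clock delta: %v (within 2s tolerance: %v)\n", delta, abs < 2*time.Second)
}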
	I1014 15:02:35.091641   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.091899   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:35.094383   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094712   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.094733   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094910   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095353   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095534   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095589   71679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:35.095658   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.095750   71679 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:35.095773   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.098288   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098316   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098680   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098713   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098743   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098795   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098835   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099003   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099186   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099198   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099367   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099371   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.099513   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099728   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.179961   71679 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:35.205523   71679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:35.350662   71679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:35.356870   71679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:35.356941   71679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:35.374967   71679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:35.374997   71679 start.go:495] detecting cgroup driver to use...
	I1014 15:02:35.375067   71679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:35.393194   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:35.408295   71679 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:35.408362   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:35.423927   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:35.438753   71679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:32.809221   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:34.811962   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:35.567539   71679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:35.702830   71679 docker.go:233] disabling docker service ...
	I1014 15:02:35.702916   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:35.720822   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:35.735403   71679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:35.880532   71679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:36.003343   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:36.018230   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:36.037065   71679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:02:36.037134   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.047820   71679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:36.047880   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.058531   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.069760   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.081047   71679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:36.092384   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.103241   71679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.121771   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
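(Aside: the sequence of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf for minikube: pin the pause image, switch the cgroup manager to cgroupfs, reset the conmon cgroup, and allow unprivileged low ports via default_sysctls. A simplified sketch of the same edits applied to a config string with Go regexps, purely illustrative; the real flow runs the sed commands in the guest and handles a few more corner cases.)

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the edits the logged sed commands perform on
// 02-crio.conf: pause image, cgroup manager, conmon cgroup, default_sysctls.
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
		conf += "default_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	return conf
}

func main() {
	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
	fmt.Print(rewriteCrioConf(sample))
}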
	I1014 15:02:36.132886   71679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:36.143239   71679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:36.143308   71679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:36.156582   71679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:36.165955   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:36.283857   71679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:36.388165   71679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:36.388243   71679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:36.393324   71679 start.go:563] Will wait 60s for crictl version
	I1014 15:02:36.393378   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.397236   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:36.444749   71679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:36.444839   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.474831   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.520531   71679 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:02:33.432474   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.932719   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.432581   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.932863   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.432886   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.932915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.432852   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.932367   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.432894   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.933035   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.637235   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.137613   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:36.521865   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:36.524566   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.524956   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:36.524984   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.525213   71679 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:36.529579   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:36.542554   71679 kubeadm.go:883] updating cluster {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:36.542701   71679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:02:36.542737   71679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:36.585681   71679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:02:36.585719   71679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:36.585806   71679 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.585838   71679 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.585865   71679 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.585886   71679 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1014 15:02:36.585925   71679 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.585814   71679 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.585954   71679 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.585843   71679 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587263   71679 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.587290   71679 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.587326   71679 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587274   71679 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1014 15:02:36.737070   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.750146   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.750401   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.767605   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1014 15:02:36.775005   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.797223   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.833657   71679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1014 15:02:36.833708   71679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.833754   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.833875   71679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1014 15:02:36.833896   71679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.833929   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.850009   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.911675   71679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1014 15:02:36.911720   71679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.911779   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973319   71679 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1014 15:02:36.973354   71679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.973383   71679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1014 15:02:36.973394   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973414   71679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.973453   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.973456   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973519   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.973619   71679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1014 15:02:36.973640   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.973644   71679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.973671   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.044689   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.044739   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.044815   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.044860   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.044907   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.044947   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166670   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.166737   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166794   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.166908   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.166924   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.272802   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.272835   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.287078   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1014 15:02:37.287167   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.287207   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.287240   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1014 15:02:37.287293   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1014 15:02:37.287320   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:37.287367   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:37.354510   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.354621   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1014 15:02:37.354659   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1014 15:02:37.354676   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354700   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1014 15:02:37.354711   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:37.354719   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354790   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1014 15:02:37.354812   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1014 15:02:37.354865   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:37.532403   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.443614   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1: (2.089069189s)
	I1014 15:02:39.443676   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1014 15:02:39.443766   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.089027703s)
	I1014 15:02:39.443790   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1014 15:02:39.443775   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:39.443813   71679 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443833   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.089105476s)
	I1014 15:02:39.443854   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443861   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1014 15:02:39.443911   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.089031069s)
	I1014 15:02:39.443933   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1014 15:02:39.443986   71679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.911557292s)
	I1014 15:02:39.444029   71679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1014 15:02:39.444057   71679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.444111   71679 ssh_runner.go:195] Run: which crictl
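(Aside: the cache_images/crio lines above check whether each kube image already exists in the container runtime via `sudo podman image inspect --format {{.Id}}` and, when the expected image is missing, load the cached tarball with `sudo podman load -i ...`. A local sketch of that check-then-load step with os/exec; the image name and cache path are copied from the log, running it needs podman and sudo on the host, and it skips the hash comparison minikube does before deciding an image "needs transfer".)

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureImage checks whether the image is already present in podman's store
// and, if not, loads it from the cached tarball.
func ensureImage(image, tarball string) error {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err == nil && strings.TrimSpace(string(out)) != "" {
		return nil // already present in the runtime
	}
	fmt.Printf("%s not present, loading %s\n", image, tarball)
	return exec.Command("sudo", "podman", "load", "-i", tarball).Run()
}

func main() {
	if err := ensureImage("registry.k8s.io/kube-scheduler:v1.31.1", "/var/lib/minikube/images/kube-scheduler_v1.31.1"); err != nil {
		fmt.Println(err)
	}
}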
	I1014 15:02:37.309522   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:39.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.432551   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.932486   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.432591   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.932694   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.432065   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.932044   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.432313   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.933055   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.432453   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.932258   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.137656   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:42.637462   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:41.514958   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.071133048s)
	I1014 15:02:41.514987   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.071109487s)
	I1014 15:02:41.515016   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1014 15:02:41.515041   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515046   71679 ssh_runner.go:235] Completed: which crictl: (2.070916553s)
	I1014 15:02:41.514994   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1014 15:02:41.515093   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515105   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:41.569878   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401013   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.885889648s)
	I1014 15:02:43.401053   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1014 15:02:43.401068   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.831164682s)
	I1014 15:02:43.401082   71679 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:43.401131   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401139   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:41.809862   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.810054   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:45.810567   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.432054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.932139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.432261   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.932517   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.432959   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.933103   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.432845   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.932825   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.432059   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.932745   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.639020   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:47.136927   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:49.137423   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:46.799144   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.397987929s)
	I1014 15:02:46.799198   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 15:02:46.799201   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398044957s)
	I1014 15:02:46.799222   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1014 15:02:46.799249   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.799295   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:46.799296   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.804398   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1014 15:02:48.971377   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.171989764s)
	I1014 15:02:48.971409   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1014 15:02:48.971436   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.971481   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.309980   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.311361   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:48.432869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.432754   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.432199   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.932861   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.432404   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.932097   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.432569   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.933078   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.141481   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.638306   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.935341   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.963834471s)
	I1014 15:02:50.935373   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1014 15:02:50.935401   71679 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:50.935452   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:51.683211   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 15:02:51.683268   71679 cache_images.go:123] Successfully loaded all cached images
	I1014 15:02:51.683277   71679 cache_images.go:92] duration metric: took 15.097525447s to LoadCachedImages
	I1014 15:02:51.683293   71679 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.31.1 crio true true} ...
	I1014 15:02:51.683441   71679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:51.683525   71679 ssh_runner.go:195] Run: crio config
	I1014 15:02:51.737769   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:02:51.737790   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:51.737799   71679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:51.737818   71679 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-813300 NodeName:no-preload-813300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:02:51.737955   71679 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-813300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:51.738019   71679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:02:51.749175   71679 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:51.749241   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:51.759120   71679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1014 15:02:51.777293   71679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:51.795073   71679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1014 15:02:51.815094   71679 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:51.819087   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:51.831806   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:51.953191   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:51.972342   71679 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300 for IP: 192.168.61.13
	I1014 15:02:51.972362   71679 certs.go:194] generating shared ca certs ...
	I1014 15:02:51.972379   71679 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:51.972534   71679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:51.972583   71679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:51.972597   71679 certs.go:256] generating profile certs ...
	I1014 15:02:51.972732   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/client.key
	I1014 15:02:51.972822   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key.4d535e2d
	I1014 15:02:51.972885   71679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key
	I1014 15:02:51.973064   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:51.973102   71679 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:51.973111   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:51.973151   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:51.973180   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:51.973203   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:51.973260   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:51.974077   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:52.019451   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:52.048323   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:52.086241   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:52.129342   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:02:52.157243   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:52.189093   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:52.214980   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:02:52.241595   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:52.270329   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:52.295153   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:52.321303   71679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:52.339181   71679 ssh_runner.go:195] Run: openssl version
	I1014 15:02:52.345152   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:52.357167   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362387   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362442   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.369003   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:52.380917   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:52.392884   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397876   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397942   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.404038   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:52.415841   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:52.426973   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431848   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431914   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.439851   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:52.455014   71679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:52.460088   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:52.466495   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:52.472659   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:52.483107   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:52.491272   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:52.497692   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:02:52.504352   71679 kubeadm.go:392] StartCluster: {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:52.504456   71679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:52.504502   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.544010   71679 cri.go:89] found id: ""
	I1014 15:02:52.544074   71679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:52.554296   71679 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:52.554314   71679 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:52.554364   71679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:52.564193   71679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:52.565367   71679 kubeconfig.go:125] found "no-preload-813300" server: "https://192.168.61.13:8443"
	I1014 15:02:52.567519   71679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:52.577268   71679 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.13
	I1014 15:02:52.577296   71679 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:52.577305   71679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:52.577343   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.614462   71679 cri.go:89] found id: ""
	I1014 15:02:52.614551   71679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:52.631835   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:52.642314   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:52.642334   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:52.642378   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:52.652036   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:52.652114   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:52.662263   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:52.672145   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:52.672214   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:52.682085   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.691628   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:52.691706   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.701314   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:52.711232   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:52.711291   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:52.722480   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:52.733359   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:52.849407   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.647528   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.863718   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.938091   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:54.046445   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:54.046544   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.546715   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.047285   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.062239   71679 api_server.go:72] duration metric: took 1.015804644s to wait for apiserver process to appear ...
	I1014 15:02:55.062265   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:55.062296   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:55.062806   71679 api_server.go:269] stopped: https://192.168.61.13:8443/healthz: Get "https://192.168.61.13:8443/healthz": dial tcp 192.168.61.13:8443: connect: connection refused
	I1014 15:02:52.811186   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.309901   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.432335   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.932860   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.433105   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.933031   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.432058   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.932422   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.432618   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.932727   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.432265   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.932733   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.136357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.136956   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.562748   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.274557   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.274587   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.274625   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.296655   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.296682   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.563094   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.567676   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:58.567717   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.063266   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.067656   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.067697   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.563300   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.569667   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.569699   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:03:00.063305   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:03:00.067834   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:03:00.079522   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:03:00.079555   71679 api_server.go:131] duration metric: took 5.017283463s to wait for apiserver health ...
	I1014 15:03:00.079565   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:03:00.079572   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:03:00.081793   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:03:00.083132   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:03:00.095329   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:03:00.114972   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:03:00.148816   71679 system_pods.go:59] 8 kube-system pods found
	I1014 15:03:00.148849   71679 system_pods.go:61] "coredns-7c65d6cfc9-5cft7" [43bb92da-74e8-4430-a889-3c23ed3fef67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:03:00.148859   71679 system_pods.go:61] "etcd-no-preload-813300" [c3e9137c-855e-49e2-8891-8df57707f75a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:03:00.148867   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [683c2d48-6c84-470c-96e5-0706a1884ee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:03:00.148872   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [405991ef-9b48-4770-ba31-a213f0eae077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:03:00.148882   71679 system_pods.go:61] "kube-proxy-jd4t4" [6c5c517b-855e-440c-976e-9c5e5d0710f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:03:00.148887   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [e76569e6-74c8-44dd-b283-a82072226686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:03:00.148892   71679 system_pods.go:61] "metrics-server-6867b74b74-br4tl" [5b3425c6-9847-447d-a9ab-076c7cc1634f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:03:00.148896   71679 system_pods.go:61] "storage-provisioner" [2c52e790-afa9-4131-8e28-801eb3f822d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 15:03:00.148906   71679 system_pods.go:74] duration metric: took 33.908487ms to wait for pod list to return data ...
	I1014 15:03:00.148918   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:03:00.161000   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:03:00.161029   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:03:00.161042   71679 node_conditions.go:105] duration metric: took 12.118841ms to run NodePressure ...
	I1014 15:03:00.161067   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:03:00.510702   71679 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515692   71679 kubeadm.go:739] kubelet initialised
	I1014 15:03:00.515715   71679 kubeadm.go:740] duration metric: took 4.986873ms waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515724   71679 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:03:00.521483   71679 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:57.810518   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:59.811287   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.432774   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.932666   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.433020   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.932671   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.432717   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.932917   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.432735   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.932668   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.432260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.932075   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.137257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.137876   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.528402   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.530210   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:04.530241   71679 pod_ready.go:82] duration metric: took 4.008725187s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:04.530254   71679 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:02.309134   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.311421   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:03.432139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.932241   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.432421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.932869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.432972   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.933010   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.432409   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.932778   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.432067   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.932749   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.636760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:07.136410   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.137483   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.537318   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.037462   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.810244   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.810932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.813334   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.432529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.932034   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.933054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.432938   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.932661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.432392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.932068   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.432066   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.932122   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.636654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.637819   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.536905   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:10.536932   71679 pod_ready.go:82] duration metric: took 6.006669219s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:10.536945   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:12.551283   71679 pod_ready.go:103] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.044142   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.044166   71679 pod_ready.go:82] duration metric: took 2.507213726s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.044176   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049176   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.049196   71679 pod_ready.go:82] duration metric: took 5.01377ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049206   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053623   71679 pod_ready.go:93] pod "kube-proxy-jd4t4" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.053646   71679 pod_ready.go:82] duration metric: took 4.434586ms for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053654   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559610   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.559632   71679 pod_ready.go:82] duration metric: took 505.972722ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559642   71679 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.309622   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.432556   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.932427   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.432053   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.932460   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.432714   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.933071   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.432567   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.932414   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.432985   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.932960   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.136599   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.137964   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.566234   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.567065   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:20.066221   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.309837   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:19.310194   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.433026   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.932015   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.932030   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.433050   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.932658   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.432667   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.933045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:21.933127   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:21.973476   72639 cri.go:89] found id: ""
	I1014 15:03:21.973507   72639 logs.go:282] 0 containers: []
	W1014 15:03:21.973517   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:21.973523   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:21.973584   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:22.011700   72639 cri.go:89] found id: ""
	I1014 15:03:22.011732   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.011742   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:22.011748   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:22.011814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:22.047721   72639 cri.go:89] found id: ""
	I1014 15:03:22.047744   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.047752   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:22.047762   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:22.047814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:22.091618   72639 cri.go:89] found id: ""
	I1014 15:03:22.091644   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.091652   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:22.091657   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:22.091706   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:22.129997   72639 cri.go:89] found id: ""
	I1014 15:03:22.130036   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.130047   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:22.130055   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:22.130114   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:22.168024   72639 cri.go:89] found id: ""
	I1014 15:03:22.168053   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.168061   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:22.168067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:22.168136   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:22.202633   72639 cri.go:89] found id: ""
	I1014 15:03:22.202660   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.202670   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:22.202677   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:22.202739   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:22.238224   72639 cri.go:89] found id: ""
	I1014 15:03:22.238251   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.238259   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:22.238267   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:22.238278   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:22.251940   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:22.251991   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:22.379777   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:22.379799   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:22.379814   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:22.456468   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:22.456507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:22.495404   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:22.495433   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
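
The 72639 run above repeatedly probes the node for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*` and, when the probe keeps failing, falls back to enumerating CRI containers and collecting logs. A minimal, self-contained sketch of such a pgrep-based liveness poll is shown below; it runs the same command locally rather than through minikube's SSH runner, and the interval, timeout, and function name are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerProcess polls `pgrep` until a kube-apiserver process appears
// or the timeout expires. Hypothetical helper; minikube performs the same check
// over SSH against the guest VM (ssh_runner.go in the log above).
func waitForAPIServerProcess(timeout, interval time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		// -x: exact match, -n: newest matching process, -f: match the full command line.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // PID of the apiserver process
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("kube-apiserver process not found after %s", timeout)
		}
		time.Sleep(interval) // the log shows roughly 500ms between attempts
	}
}

func main() {
	pid, err := waitForAPIServerProcess(2*time.Minute, 500*time.Millisecond)
	if err != nil {
		fmt.Println("apiserver not up:", err)
		return
	}
	fmt.Println("kube-apiserver PID:", pid)
}
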
	I1014 15:03:20.636995   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.637141   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.066371   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.566023   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:21.809579   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.309010   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:25.048061   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:25.068586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:25.068658   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:25.121199   72639 cri.go:89] found id: ""
	I1014 15:03:25.121228   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.121237   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:25.121243   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:25.121303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:25.174705   72639 cri.go:89] found id: ""
	I1014 15:03:25.174738   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.174749   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:25.174757   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:25.174815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:25.236972   72639 cri.go:89] found id: ""
	I1014 15:03:25.237002   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.237013   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:25.237020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:25.237077   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:25.276443   72639 cri.go:89] found id: ""
	I1014 15:03:25.276473   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.276483   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:25.276489   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:25.276541   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:25.314573   72639 cri.go:89] found id: ""
	I1014 15:03:25.314623   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.314636   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:25.314645   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:25.314708   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:25.357489   72639 cri.go:89] found id: ""
	I1014 15:03:25.357515   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.357525   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:25.357533   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:25.357595   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:25.397504   72639 cri.go:89] found id: ""
	I1014 15:03:25.397527   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.397538   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:25.397546   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:25.397597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:25.433139   72639 cri.go:89] found id: ""
	I1014 15:03:25.433162   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.433170   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:25.433179   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:25.433193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:25.448088   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:25.448121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:25.522377   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:25.522401   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:25.522415   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:25.595505   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:25.595538   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:25.643478   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:25.643511   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:25.137557   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.637096   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.067425   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.565568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:26.809419   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.309193   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.310234   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:28.195236   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:28.208612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:28.208686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:28.248538   72639 cri.go:89] found id: ""
	I1014 15:03:28.248569   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.248581   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:28.248588   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:28.248652   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:28.286103   72639 cri.go:89] found id: ""
	I1014 15:03:28.286131   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.286143   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:28.286149   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:28.286209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:28.321335   72639 cri.go:89] found id: ""
	I1014 15:03:28.321371   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.321383   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:28.321391   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:28.321453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:28.358538   72639 cri.go:89] found id: ""
	I1014 15:03:28.358571   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.358581   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:28.358588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:28.358661   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:28.397058   72639 cri.go:89] found id: ""
	I1014 15:03:28.397087   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.397099   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:28.397106   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:28.397175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:28.434010   72639 cri.go:89] found id: ""
	I1014 15:03:28.434032   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.434040   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:28.434045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:28.434095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:28.474646   72639 cri.go:89] found id: ""
	I1014 15:03:28.474672   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.474681   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:28.474687   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:28.474736   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:28.512833   72639 cri.go:89] found id: ""
	I1014 15:03:28.512860   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.512871   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:28.512882   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:28.512894   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:28.526233   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:28.526262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:28.601366   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:28.601393   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:28.601416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:28.690261   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:28.690300   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:28.734134   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:28.734158   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
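
Each fallback cycle above asks CRI-O for containers by name with `sudo crictl ps -a --quiet --name=<component>`; an empty result is what produces the `0 containers` and `No container was found matching` lines. A rough reproduction of that check, assuming crictl is on the PATH and pointed at the node's CRI socket, could look like the sketch below (the helper name is made up for illustration).

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the check in the log: `crictl ps -a --quiet --name=<name>`
// prints one container ID per line, or nothing at all when no container matches.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(component)
		if err != nil {
			fmt.Println(err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", component)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", component, len(ids), ids)
	}
}
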
	I1014 15:03:31.290184   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:31.303493   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:31.303558   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:31.341521   72639 cri.go:89] found id: ""
	I1014 15:03:31.341552   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.341563   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:31.341569   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:31.341627   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:31.378811   72639 cri.go:89] found id: ""
	I1014 15:03:31.378839   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.378851   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:31.378859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:31.378922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:31.416282   72639 cri.go:89] found id: ""
	I1014 15:03:31.416310   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.416321   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:31.416328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:31.416392   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:31.456089   72639 cri.go:89] found id: ""
	I1014 15:03:31.456123   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.456134   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:31.456142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:31.456202   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:31.496429   72639 cri.go:89] found id: ""
	I1014 15:03:31.496468   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.496478   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:31.496485   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:31.496548   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:31.535226   72639 cri.go:89] found id: ""
	I1014 15:03:31.535248   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.535256   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:31.535262   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:31.535321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:31.572580   72639 cri.go:89] found id: ""
	I1014 15:03:31.572608   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.572623   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:31.572631   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:31.572691   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:31.606736   72639 cri.go:89] found id: ""
	I1014 15:03:31.606759   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.606766   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:31.606774   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:31.606785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:31.646048   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:31.646078   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.696818   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:31.696851   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:31.710099   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:31.710128   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:31.787756   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:31.787783   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:31.787798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:30.136436   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:32.138037   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.139660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.566034   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.567029   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.809434   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.309487   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.369392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:34.383263   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:34.383344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:34.417763   72639 cri.go:89] found id: ""
	I1014 15:03:34.417797   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.417809   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:34.417816   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:34.417890   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:34.453361   72639 cri.go:89] found id: ""
	I1014 15:03:34.453391   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.453402   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:34.453409   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:34.453488   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:34.490878   72639 cri.go:89] found id: ""
	I1014 15:03:34.490905   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.490913   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:34.490919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:34.490980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:34.527554   72639 cri.go:89] found id: ""
	I1014 15:03:34.527584   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.527595   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:34.527603   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:34.527655   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:34.564813   72639 cri.go:89] found id: ""
	I1014 15:03:34.564841   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.564851   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:34.564857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:34.564903   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:34.599899   72639 cri.go:89] found id: ""
	I1014 15:03:34.599930   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.599942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:34.599949   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:34.600019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:34.641686   72639 cri.go:89] found id: ""
	I1014 15:03:34.641717   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.641728   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:34.641735   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:34.641794   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:34.681154   72639 cri.go:89] found id: ""
	I1014 15:03:34.681184   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.681195   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:34.681205   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:34.681218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:34.719638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:34.719672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:34.771687   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:34.771722   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:34.785943   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:34.785972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:34.861821   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:34.861861   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:34.861875   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.441605   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:37.456763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:37.456828   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:37.494176   72639 cri.go:89] found id: ""
	I1014 15:03:37.494202   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.494210   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:37.494216   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:37.494268   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:37.538802   72639 cri.go:89] found id: ""
	I1014 15:03:37.538834   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.538846   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:37.538853   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:37.538913   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:37.586282   72639 cri.go:89] found id: ""
	I1014 15:03:37.586312   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.586322   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:37.586328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:37.586397   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:37.632673   72639 cri.go:89] found id: ""
	I1014 15:03:37.632698   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.632709   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:37.632715   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:37.632771   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:37.673340   72639 cri.go:89] found id: ""
	I1014 15:03:37.673364   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.673372   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:37.673377   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:37.673427   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:37.718725   72639 cri.go:89] found id: ""
	I1014 15:03:37.718750   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.718758   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:37.718764   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:37.718807   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:37.760560   72639 cri.go:89] found id: ""
	I1014 15:03:37.760587   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.760597   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:37.760605   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:37.760665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:37.800912   72639 cri.go:89] found id: ""
	I1014 15:03:37.800941   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.800949   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:37.800957   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:37.800968   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:37.815338   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:37.815363   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:37.893018   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:37.893050   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:37.893067   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.978315   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:37.978349   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:36.637635   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:39.136295   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.065915   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.066310   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.810020   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.810460   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.019760   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:38.019788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.570918   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:40.586058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:40.586122   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:40.623753   72639 cri.go:89] found id: ""
	I1014 15:03:40.623784   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.623795   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:40.623801   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:40.623862   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:40.663909   72639 cri.go:89] found id: ""
	I1014 15:03:40.663937   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.663946   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:40.663953   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:40.664008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:40.698572   72639 cri.go:89] found id: ""
	I1014 15:03:40.698615   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.698626   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:40.698633   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:40.698683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:40.734882   72639 cri.go:89] found id: ""
	I1014 15:03:40.734907   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.734914   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:40.734920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:40.734976   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:40.768429   72639 cri.go:89] found id: ""
	I1014 15:03:40.768455   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.768462   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:40.768468   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:40.768527   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:40.803429   72639 cri.go:89] found id: ""
	I1014 15:03:40.803456   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.803466   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:40.803474   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:40.803535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:40.842854   72639 cri.go:89] found id: ""
	I1014 15:03:40.842883   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.842905   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:40.842913   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:40.842988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:40.879638   72639 cri.go:89] found id: ""
	I1014 15:03:40.879661   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.879669   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:40.879677   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:40.879687   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:40.924949   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:40.924983   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.976271   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:40.976304   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:40.991492   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:40.991520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:41.071418   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:41.071439   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:41.071453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:41.136877   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.637356   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.566353   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.065982   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.066405   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.310188   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.811549   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.652387   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:43.666239   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:43.666317   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:43.705726   72639 cri.go:89] found id: ""
	I1014 15:03:43.705752   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.705761   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:43.705766   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:43.705814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:43.745648   72639 cri.go:89] found id: ""
	I1014 15:03:43.745672   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.745680   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:43.745685   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:43.745731   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:43.783032   72639 cri.go:89] found id: ""
	I1014 15:03:43.783055   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.783063   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:43.783068   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:43.783115   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:43.820582   72639 cri.go:89] found id: ""
	I1014 15:03:43.820607   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.820617   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:43.820623   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:43.820669   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:43.862312   72639 cri.go:89] found id: ""
	I1014 15:03:43.862338   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.862348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:43.862353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:43.862404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:43.898338   72639 cri.go:89] found id: ""
	I1014 15:03:43.898368   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.898379   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:43.898388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:43.898448   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:43.934682   72639 cri.go:89] found id: ""
	I1014 15:03:43.934709   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.934719   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:43.934726   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:43.934781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:43.970209   72639 cri.go:89] found id: ""
	I1014 15:03:43.970237   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.970247   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:43.970257   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:43.970269   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:44.024791   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:44.024832   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:44.038431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:44.038457   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:44.117255   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:44.117291   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:44.117308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:44.199397   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:44.199436   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:46.739819   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:46.755553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:46.755625   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:46.797225   72639 cri.go:89] found id: ""
	I1014 15:03:46.797253   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.797265   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:46.797272   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:46.797335   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:46.832999   72639 cri.go:89] found id: ""
	I1014 15:03:46.833025   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.833036   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:46.833043   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:46.833103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:46.872711   72639 cri.go:89] found id: ""
	I1014 15:03:46.872733   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.872741   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:46.872746   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:46.872795   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:46.909945   72639 cri.go:89] found id: ""
	I1014 15:03:46.909968   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.909977   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:46.909985   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:46.910046   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:46.946036   72639 cri.go:89] found id: ""
	I1014 15:03:46.946067   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.946080   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:46.946087   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:46.946141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:46.981772   72639 cri.go:89] found id: ""
	I1014 15:03:46.981806   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.981819   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:46.981828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:46.981896   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:47.022761   72639 cri.go:89] found id: ""
	I1014 15:03:47.022790   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.022800   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:47.022807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:47.022869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:47.057368   72639 cri.go:89] found id: ""
	I1014 15:03:47.057392   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.057400   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:47.057408   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:47.057418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:47.134369   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:47.134408   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:47.179550   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:47.179586   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:47.233317   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:47.233355   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:47.247598   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:47.247629   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:47.321309   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:45.637760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.136826   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:47.067003   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.565410   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:50.812241   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.821955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:49.836907   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:49.836975   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:49.876651   72639 cri.go:89] found id: ""
	I1014 15:03:49.876682   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.876694   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:49.876713   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:49.876781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:49.913440   72639 cri.go:89] found id: ""
	I1014 15:03:49.913464   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.913473   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:49.913479   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:49.913535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:49.949352   72639 cri.go:89] found id: ""
	I1014 15:03:49.949383   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.949395   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:49.949402   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:49.949463   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:49.984599   72639 cri.go:89] found id: ""
	I1014 15:03:49.984629   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.984641   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:49.984649   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:49.984709   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:50.028049   72639 cri.go:89] found id: ""
	I1014 15:03:50.028072   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.028083   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:50.028090   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:50.028166   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:50.062272   72639 cri.go:89] found id: ""
	I1014 15:03:50.062294   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.062302   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:50.062308   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:50.062358   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:50.099722   72639 cri.go:89] found id: ""
	I1014 15:03:50.099750   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.099762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:50.099769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:50.099830   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:50.139984   72639 cri.go:89] found id: ""
	I1014 15:03:50.140005   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.140013   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:50.140020   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:50.140032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:50.218467   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:50.218500   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:50.260600   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:50.260635   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:50.313725   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:50.313757   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:50.328431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:50.328462   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:50.401334   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:52.901787   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:52.917836   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:52.917902   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:52.955387   72639 cri.go:89] found id: ""
	I1014 15:03:52.955418   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.955431   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:52.955440   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:52.955504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:52.990890   72639 cri.go:89] found id: ""
	I1014 15:03:52.990924   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.990936   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:52.990945   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:52.991004   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:50.636581   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.137639   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:51.566403   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:54.066690   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.310174   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:55.809402   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.032344   72639 cri.go:89] found id: ""
	I1014 15:03:53.032374   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.032384   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:53.032390   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:53.032458   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:53.073501   72639 cri.go:89] found id: ""
	I1014 15:03:53.073527   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.073537   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:53.073544   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:53.073602   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:53.114273   72639 cri.go:89] found id: ""
	I1014 15:03:53.114307   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.114316   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:53.114334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:53.114389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:53.155448   72639 cri.go:89] found id: ""
	I1014 15:03:53.155475   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.155484   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:53.155490   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:53.155539   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:53.191304   72639 cri.go:89] found id: ""
	I1014 15:03:53.191338   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.191350   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:53.191357   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:53.191438   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:53.224664   72639 cri.go:89] found id: ""
	I1014 15:03:53.224691   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.224702   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:53.224727   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:53.224744   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:53.275751   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:53.275786   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:53.289275   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:53.289303   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:53.369828   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:53.369855   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:53.369871   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:53.457248   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:53.457285   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:56.003384   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:56.017722   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:56.017782   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:56.056644   72639 cri.go:89] found id: ""
	I1014 15:03:56.056675   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.056686   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:56.056694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:56.056757   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:56.094482   72639 cri.go:89] found id: ""
	I1014 15:03:56.094507   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.094517   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:56.094524   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:56.094583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:56.129884   72639 cri.go:89] found id: ""
	I1014 15:03:56.129913   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.129921   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:56.129926   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:56.129974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:56.167171   72639 cri.go:89] found id: ""
	I1014 15:03:56.167198   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.167206   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:56.167211   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:56.167264   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:56.204400   72639 cri.go:89] found id: ""
	I1014 15:03:56.204433   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.204442   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:56.204447   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:56.204494   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:56.240407   72639 cri.go:89] found id: ""
	I1014 15:03:56.240437   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.240448   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:56.240456   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:56.240517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:56.277653   72639 cri.go:89] found id: ""
	I1014 15:03:56.277679   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.277687   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:56.277693   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:56.277738   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:56.313423   72639 cri.go:89] found id: ""
	I1014 15:03:56.313451   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.313459   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:56.313468   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:56.313480   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:56.368094   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:56.368133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:56.382563   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:56.382621   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:56.455106   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:56.455130   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:56.455144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:56.532288   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:56.532329   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:55.636007   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:57.637196   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:56.566763   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.066227   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:58.309184   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:00.309370   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.072469   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:59.089024   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:59.089094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:59.130798   72639 cri.go:89] found id: ""
	I1014 15:03:59.130829   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.130840   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:59.130848   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:59.130908   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:59.167828   72639 cri.go:89] found id: ""
	I1014 15:03:59.167854   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.167864   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:59.167871   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:59.167932   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:59.223482   72639 cri.go:89] found id: ""
	I1014 15:03:59.223509   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.223520   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:59.223528   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:59.223590   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:59.261186   72639 cri.go:89] found id: ""
	I1014 15:03:59.261231   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.261243   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:59.261251   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:59.261314   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:59.296924   72639 cri.go:89] found id: ""
	I1014 15:03:59.296985   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.297000   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:59.297008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:59.297084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:59.333891   72639 cri.go:89] found id: ""
	I1014 15:03:59.333915   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.333923   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:59.333929   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:59.333991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:59.374106   72639 cri.go:89] found id: ""
	I1014 15:03:59.374134   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.374143   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:59.374150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:59.374222   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:59.412256   72639 cri.go:89] found id: ""
	I1014 15:03:59.412283   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.412291   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:59.412298   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:59.412308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:59.492869   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:59.492904   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:59.492923   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:59.576441   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:59.576473   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.618638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:59.618668   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:59.671295   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:59.671331   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.184689   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:02.197763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:02.197833   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:02.231709   72639 cri.go:89] found id: ""
	I1014 15:04:02.231734   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.231746   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:02.231753   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:02.231815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:02.269259   72639 cri.go:89] found id: ""
	I1014 15:04:02.269291   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.269303   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:02.269311   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:02.269390   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:02.305926   72639 cri.go:89] found id: ""
	I1014 15:04:02.305956   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.305967   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:02.305975   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:02.306034   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:02.349516   72639 cri.go:89] found id: ""
	I1014 15:04:02.349544   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.349557   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:02.349563   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:02.349622   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:02.388334   72639 cri.go:89] found id: ""
	I1014 15:04:02.388361   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.388371   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:02.388376   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:02.388428   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:02.422742   72639 cri.go:89] found id: ""
	I1014 15:04:02.422770   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.422781   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:02.422789   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:02.422850   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:02.463686   72639 cri.go:89] found id: ""
	I1014 15:04:02.463710   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.463718   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:02.463724   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:02.463770   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:02.498352   72639 cri.go:89] found id: ""
	I1014 15:04:02.498383   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.498394   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:02.498404   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:02.498418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.512531   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:02.512561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:02.585331   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:02.585359   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:02.585373   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:02.667376   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:02.667414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:02.708101   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:02.708133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:00.136170   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.138198   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:01.566456   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.066934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.309906   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.310009   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.310084   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:05.259839   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:05.273102   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:05.273186   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:05.311745   72639 cri.go:89] found id: ""
	I1014 15:04:05.311768   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.311776   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:05.311787   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:05.311834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:05.349313   72639 cri.go:89] found id: ""
	I1014 15:04:05.349336   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.349344   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:05.349352   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:05.349416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:05.388003   72639 cri.go:89] found id: ""
	I1014 15:04:05.388026   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.388034   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:05.388039   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:05.388098   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:05.426636   72639 cri.go:89] found id: ""
	I1014 15:04:05.426665   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.426676   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:05.426683   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:05.426745   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:05.461945   72639 cri.go:89] found id: ""
	I1014 15:04:05.461974   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.461983   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:05.461989   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:05.462049   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:05.497099   72639 cri.go:89] found id: ""
	I1014 15:04:05.497130   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.497142   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:05.497149   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:05.497216   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:05.531621   72639 cri.go:89] found id: ""
	I1014 15:04:05.531652   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.531664   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:05.531671   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:05.531729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:05.568950   72639 cri.go:89] found id: ""
	I1014 15:04:05.568973   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.568983   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:05.568992   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:05.569012   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.624806   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:05.624846   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:05.651912   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:05.651961   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:05.740342   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:05.740369   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:05.740384   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:05.817901   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:05.817932   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:04.636643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:07.137525   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.566519   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.567458   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.809718   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.809968   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.360267   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:08.373249   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:08.373325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:08.409485   72639 cri.go:89] found id: ""
	I1014 15:04:08.409520   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.409535   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:08.409542   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:08.409604   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:08.444977   72639 cri.go:89] found id: ""
	I1014 15:04:08.445000   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.445008   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:08.445014   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:08.445061   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:08.478080   72639 cri.go:89] found id: ""
	I1014 15:04:08.478108   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.478117   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:08.478123   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:08.478169   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:08.511510   72639 cri.go:89] found id: ""
	I1014 15:04:08.511536   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.511545   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:08.511552   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:08.511603   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:08.546260   72639 cri.go:89] found id: ""
	I1014 15:04:08.546285   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.546292   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:08.546299   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:08.546347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:08.582775   72639 cri.go:89] found id: ""
	I1014 15:04:08.582799   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.582810   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:08.582816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:08.582875   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:08.619208   72639 cri.go:89] found id: ""
	I1014 15:04:08.619231   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.619239   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:08.619244   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:08.619299   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:08.654823   72639 cri.go:89] found id: ""
	I1014 15:04:08.654849   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.654860   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:08.654870   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:08.654885   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:08.704543   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:08.704574   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:08.718111   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:08.718144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:08.792267   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:08.792290   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:08.792309   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:08.870178   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:08.870210   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:11.409975   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:11.432171   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:11.432243   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:11.468997   72639 cri.go:89] found id: ""
	I1014 15:04:11.469021   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.469030   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:11.469035   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:11.469094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:11.504312   72639 cri.go:89] found id: ""
	I1014 15:04:11.504337   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.504346   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:11.504354   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:11.504417   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:11.540628   72639 cri.go:89] found id: ""
	I1014 15:04:11.540654   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.540662   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:11.540667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:11.540729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:11.576466   72639 cri.go:89] found id: ""
	I1014 15:04:11.576491   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.576498   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:11.576506   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:11.576550   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:11.611466   72639 cri.go:89] found id: ""
	I1014 15:04:11.611501   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.611512   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:11.611519   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:11.611578   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:11.650089   72639 cri.go:89] found id: ""
	I1014 15:04:11.650116   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.650126   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:11.650133   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:11.650191   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:11.686538   72639 cri.go:89] found id: ""
	I1014 15:04:11.686563   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.686571   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:11.686577   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:11.686654   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:11.725494   72639 cri.go:89] found id: ""
	I1014 15:04:11.725517   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.725524   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:11.725532   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:11.725545   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:11.779062   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:11.779102   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:11.792726   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:11.792753   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:11.867945   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:11.867972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:11.867986   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:11.952299   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:11.952340   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:09.636140   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:11.636455   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.136183   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.567626   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.065875   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.066484   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.310523   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.811094   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.493922   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:14.506754   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:14.506817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:14.540456   72639 cri.go:89] found id: ""
	I1014 15:04:14.540480   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.540489   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:14.540495   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:14.540545   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:14.574819   72639 cri.go:89] found id: ""
	I1014 15:04:14.574843   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.574853   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:14.574859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:14.574917   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:14.608834   72639 cri.go:89] found id: ""
	I1014 15:04:14.608859   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.608868   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:14.608873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:14.608920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:14.644182   72639 cri.go:89] found id: ""
	I1014 15:04:14.644210   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.644218   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:14.644223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:14.644283   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:14.679113   72639 cri.go:89] found id: ""
	I1014 15:04:14.679145   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.679156   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:14.679164   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:14.679228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:14.716111   72639 cri.go:89] found id: ""
	I1014 15:04:14.716142   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.716154   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:14.716167   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:14.716220   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:14.755884   72639 cri.go:89] found id: ""
	I1014 15:04:14.755907   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.755915   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:14.755920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:14.755968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:14.794167   72639 cri.go:89] found id: ""
	I1014 15:04:14.794195   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.794207   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:14.794217   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:14.794234   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:14.844828   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:14.844864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:14.859424   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:14.859451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:14.936660   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:14.936687   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:14.936703   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:15.017034   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:15.017070   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:17.555604   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:17.570628   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:17.570687   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:17.612919   72639 cri.go:89] found id: ""
	I1014 15:04:17.612943   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.612951   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:17.612956   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:17.613002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:17.651178   72639 cri.go:89] found id: ""
	I1014 15:04:17.651210   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.651220   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:17.651226   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:17.651278   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:17.687923   72639 cri.go:89] found id: ""
	I1014 15:04:17.687955   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.687966   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:17.687973   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:17.688024   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:17.724759   72639 cri.go:89] found id: ""
	I1014 15:04:17.724790   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.724800   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:17.724807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:17.724866   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:17.760189   72639 cri.go:89] found id: ""
	I1014 15:04:17.760212   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.760220   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:17.760226   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:17.760274   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:17.797517   72639 cri.go:89] found id: ""
	I1014 15:04:17.797541   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.797549   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:17.797554   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:17.797601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:17.833238   72639 cri.go:89] found id: ""
	I1014 15:04:17.833261   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.833270   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:17.833275   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:17.833321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:17.868828   72639 cri.go:89] found id: ""
	I1014 15:04:17.868857   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.868865   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:17.868873   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:17.868883   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:17.956972   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:17.957011   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:16.137357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.636865   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:17.067415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:19.566146   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.310380   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:20.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.006354   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:18.006390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:18.056237   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:18.056271   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:18.070763   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:18.070792   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:18.147471   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:20.648238   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:20.661465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:20.661534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:20.695869   72639 cri.go:89] found id: ""
	I1014 15:04:20.695894   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.695902   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:20.695907   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:20.695957   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:20.729271   72639 cri.go:89] found id: ""
	I1014 15:04:20.729295   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.729313   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:20.729319   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:20.729364   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:20.767110   72639 cri.go:89] found id: ""
	I1014 15:04:20.767137   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.767147   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:20.767154   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:20.767209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:20.802752   72639 cri.go:89] found id: ""
	I1014 15:04:20.802781   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.802791   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:20.802798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:20.802846   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:20.841958   72639 cri.go:89] found id: ""
	I1014 15:04:20.841987   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.841998   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:20.842005   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:20.842066   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:20.878869   72639 cri.go:89] found id: ""
	I1014 15:04:20.878896   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.878907   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:20.878914   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:20.878974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:20.913802   72639 cri.go:89] found id: ""
	I1014 15:04:20.913838   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.913852   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:20.913861   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:20.913922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:20.948350   72639 cri.go:89] found id: ""
	I1014 15:04:20.948378   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.948395   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:20.948403   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:20.948416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:21.001065   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:21.001098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:21.014427   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:21.014458   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:21.091386   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:21.091412   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:21.091432   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:21.175255   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:21.175299   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:21.137358   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.636623   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.066415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:24.066476   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.809925   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:25.309528   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.718260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:23.732366   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:23.732445   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:23.767269   72639 cri.go:89] found id: ""
	I1014 15:04:23.767299   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.767311   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:23.767317   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:23.767379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:23.808502   72639 cri.go:89] found id: ""
	I1014 15:04:23.808532   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.808543   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:23.808550   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:23.808606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:23.845632   72639 cri.go:89] found id: ""
	I1014 15:04:23.845664   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.845677   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:23.845685   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:23.845753   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:23.880218   72639 cri.go:89] found id: ""
	I1014 15:04:23.880249   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.880261   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:23.880268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:23.880332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:23.915674   72639 cri.go:89] found id: ""
	I1014 15:04:23.915697   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.915705   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:23.915710   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:23.915767   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:23.950526   72639 cri.go:89] found id: ""
	I1014 15:04:23.950559   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.950570   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:23.950578   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:23.950656   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:23.986130   72639 cri.go:89] found id: ""
	I1014 15:04:23.986167   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.986178   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:23.986186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:23.986246   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:24.027112   72639 cri.go:89] found id: ""
	I1014 15:04:24.027141   72639 logs.go:282] 0 containers: []
	W1014 15:04:24.027154   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:24.027165   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:24.027181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:24.082559   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:24.082610   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:24.096900   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:24.096929   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:24.173293   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:24.173327   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:24.173341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:24.256921   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:24.256962   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:26.802073   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:26.817307   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:26.817366   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:26.855777   72639 cri.go:89] found id: ""
	I1014 15:04:26.855805   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.855817   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:26.855825   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:26.855876   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:26.892260   72639 cri.go:89] found id: ""
	I1014 15:04:26.892288   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.892300   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:26.892308   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:26.892369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:26.931066   72639 cri.go:89] found id: ""
	I1014 15:04:26.931103   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.931114   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:26.931122   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:26.931174   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:26.966890   72639 cri.go:89] found id: ""
	I1014 15:04:26.966923   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.966933   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:26.966941   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:26.967002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:27.001338   72639 cri.go:89] found id: ""
	I1014 15:04:27.001368   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.001379   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:27.001386   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:27.001454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:27.041798   72639 cri.go:89] found id: ""
	I1014 15:04:27.041830   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.041839   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:27.041844   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:27.041905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:27.080248   72639 cri.go:89] found id: ""
	I1014 15:04:27.080279   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.080288   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:27.080293   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:27.080341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:27.116207   72639 cri.go:89] found id: ""
	I1014 15:04:27.116234   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.116242   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:27.116250   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:27.116264   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:27.191149   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:27.191174   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:27.191203   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:27.275771   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:27.275808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:27.323223   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:27.323254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:27.375409   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:27.375455   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:26.137156   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.637895   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:26.066790   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.565208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:27.810315   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.309211   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:29.890408   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:29.904797   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:29.904853   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:29.938655   72639 cri.go:89] found id: ""
	I1014 15:04:29.938685   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.938698   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:29.938705   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:29.938765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:29.976477   72639 cri.go:89] found id: ""
	I1014 15:04:29.976508   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.976519   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:29.976526   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:29.976583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:30.014813   72639 cri.go:89] found id: ""
	I1014 15:04:30.014842   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.014853   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:30.014860   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:30.014926   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:30.050804   72639 cri.go:89] found id: ""
	I1014 15:04:30.050833   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.050844   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:30.050854   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:30.050918   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:30.087921   72639 cri.go:89] found id: ""
	I1014 15:04:30.087946   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.087954   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:30.087959   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:30.088016   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:30.125411   72639 cri.go:89] found id: ""
	I1014 15:04:30.125446   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.125458   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:30.125465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:30.125519   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:30.162067   72639 cri.go:89] found id: ""
	I1014 15:04:30.162099   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.162110   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:30.162118   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:30.162181   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:30.200376   72639 cri.go:89] found id: ""
	I1014 15:04:30.200406   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.200418   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:30.200435   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:30.200451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:30.279965   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:30.279992   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:30.280007   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:30.364866   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:30.364900   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:30.408808   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:30.408842   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:30.464473   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:30.464507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:32.980254   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:32.994254   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:32.994320   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:31.136531   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.137201   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.566228   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.567393   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.065955   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.810349   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.308794   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.035996   72639 cri.go:89] found id: ""
	I1014 15:04:33.036025   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.036036   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:33.036043   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:33.036103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:33.077494   72639 cri.go:89] found id: ""
	I1014 15:04:33.077522   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.077531   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:33.077538   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:33.077585   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:33.112666   72639 cri.go:89] found id: ""
	I1014 15:04:33.112695   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.112705   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:33.112711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:33.112772   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:33.150229   72639 cri.go:89] found id: ""
	I1014 15:04:33.150266   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.150276   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:33.150282   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:33.150336   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:33.186960   72639 cri.go:89] found id: ""
	I1014 15:04:33.186989   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.187001   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:33.187008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:33.187062   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:33.223596   72639 cri.go:89] found id: ""
	I1014 15:04:33.223631   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.223641   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:33.223647   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:33.223711   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:33.260137   72639 cri.go:89] found id: ""
	I1014 15:04:33.260162   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.260170   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:33.260175   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:33.260228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:33.298072   72639 cri.go:89] found id: ""
	I1014 15:04:33.298095   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.298103   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:33.298110   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:33.298121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:33.379587   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:33.379623   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:33.423427   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:33.423456   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:33.474644   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:33.474683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:33.488324   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:33.488354   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:33.556257   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.056955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:36.072461   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:36.072536   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:36.109467   72639 cri.go:89] found id: ""
	I1014 15:04:36.109493   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.109502   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:36.109509   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:36.109561   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:36.147985   72639 cri.go:89] found id: ""
	I1014 15:04:36.148012   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.148020   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:36.148025   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:36.148071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:36.183885   72639 cri.go:89] found id: ""
	I1014 15:04:36.183906   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.183914   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:36.183919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:36.183968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:36.220994   72639 cri.go:89] found id: ""
	I1014 15:04:36.221025   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.221036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:36.221044   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:36.221108   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:36.256586   72639 cri.go:89] found id: ""
	I1014 15:04:36.256612   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.256621   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:36.256627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:36.256683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:36.293229   72639 cri.go:89] found id: ""
	I1014 15:04:36.293256   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.293265   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:36.293272   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:36.293339   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:36.329254   72639 cri.go:89] found id: ""
	I1014 15:04:36.329279   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.329290   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:36.329297   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:36.329357   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:36.366495   72639 cri.go:89] found id: ""
	I1014 15:04:36.366526   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.366538   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:36.366548   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:36.366561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:36.420985   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:36.421018   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:36.435532   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:36.435565   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:36.510459   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.510484   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:36.510499   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:36.593057   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:36.593094   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:35.637182   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.637348   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.066334   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.566950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.309629   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.809500   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.138570   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:39.152280   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:39.152342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:39.186647   72639 cri.go:89] found id: ""
	I1014 15:04:39.186676   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.186687   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:39.186694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:39.186754   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:39.223560   72639 cri.go:89] found id: ""
	I1014 15:04:39.223586   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.223594   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:39.223599   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:39.223644   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:39.257835   72639 cri.go:89] found id: ""
	I1014 15:04:39.257867   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.257879   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:39.257886   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:39.257947   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:39.294656   72639 cri.go:89] found id: ""
	I1014 15:04:39.294684   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.294692   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:39.294699   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:39.294750   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:39.333474   72639 cri.go:89] found id: ""
	I1014 15:04:39.333503   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.333513   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:39.333520   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:39.333586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:39.374385   72639 cri.go:89] found id: ""
	I1014 15:04:39.374414   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.374424   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:39.374435   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:39.374483   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:39.412856   72639 cri.go:89] found id: ""
	I1014 15:04:39.412888   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.412899   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:39.412906   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:39.412966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:39.463087   72639 cri.go:89] found id: ""
	I1014 15:04:39.463115   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.463127   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:39.463138   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:39.463154   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:39.514309   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:39.514342   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:39.528947   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:39.528972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:39.603984   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:39.604004   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:39.604016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.685053   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:39.685093   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.234178   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:42.247421   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:42.247497   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:42.288496   72639 cri.go:89] found id: ""
	I1014 15:04:42.288521   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.288529   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:42.288535   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:42.288588   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:42.324346   72639 cri.go:89] found id: ""
	I1014 15:04:42.324382   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.324394   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:42.324401   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:42.324469   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:42.362879   72639 cri.go:89] found id: ""
	I1014 15:04:42.362910   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.362922   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:42.362928   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:42.362991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:42.399347   72639 cri.go:89] found id: ""
	I1014 15:04:42.399375   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.399383   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:42.399389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:42.399473   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:42.434942   72639 cri.go:89] found id: ""
	I1014 15:04:42.434971   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.434990   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:42.434999   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:42.435063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:42.470886   72639 cri.go:89] found id: ""
	I1014 15:04:42.470916   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.470928   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:42.470934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:42.470994   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:42.510713   72639 cri.go:89] found id: ""
	I1014 15:04:42.510742   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.510752   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:42.510758   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:42.510820   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:42.544506   72639 cri.go:89] found id: ""
	I1014 15:04:42.544538   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.544547   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:42.544559   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:42.544570   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.588658   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:42.588694   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:42.642165   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:42.642198   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:42.658073   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:42.658110   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:42.730486   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:42.730510   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:42.730524   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.637476   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.637715   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.137654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:42.065534   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.066309   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.809932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.309377   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.309699   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:45.307806   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:45.321664   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:45.321733   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:45.359670   72639 cri.go:89] found id: ""
	I1014 15:04:45.359697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.359708   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:45.359715   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:45.359781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:45.398673   72639 cri.go:89] found id: ""
	I1014 15:04:45.398703   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.398715   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:45.398722   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:45.398784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:45.441656   72639 cri.go:89] found id: ""
	I1014 15:04:45.441685   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.441697   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:45.441705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:45.441768   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:45.476159   72639 cri.go:89] found id: ""
	I1014 15:04:45.476188   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.476195   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:45.476201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:45.476263   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:45.513776   72639 cri.go:89] found id: ""
	I1014 15:04:45.513807   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.513819   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:45.513828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:45.513894   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:45.550336   72639 cri.go:89] found id: ""
	I1014 15:04:45.550371   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.550382   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:45.550388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:45.550450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:45.586668   72639 cri.go:89] found id: ""
	I1014 15:04:45.586697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.586705   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:45.586711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:45.586760   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:45.622530   72639 cri.go:89] found id: ""
	I1014 15:04:45.622559   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.622568   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:45.622576   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:45.622589   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:45.674471   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:45.674504   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:45.690430   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:45.690463   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:45.772133   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:45.772165   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:45.772181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.859835   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:45.859880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:46.636239   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.637696   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.565440   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.569076   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.309788   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.310209   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.434011   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:48.448747   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:48.448826   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:48.493642   72639 cri.go:89] found id: ""
	I1014 15:04:48.493668   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.493680   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:48.493687   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:48.493747   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:48.530298   72639 cri.go:89] found id: ""
	I1014 15:04:48.530327   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.530336   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:48.530344   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:48.530403   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:48.566215   72639 cri.go:89] found id: ""
	I1014 15:04:48.566242   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.566252   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:48.566261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:48.566325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:48.604528   72639 cri.go:89] found id: ""
	I1014 15:04:48.604553   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.604561   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:48.604566   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:48.604616   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:48.646152   72639 cri.go:89] found id: ""
	I1014 15:04:48.646180   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.646191   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:48.646198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:48.646257   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:48.682670   72639 cri.go:89] found id: ""
	I1014 15:04:48.682696   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.682704   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:48.682711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:48.682762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:48.722292   72639 cri.go:89] found id: ""
	I1014 15:04:48.722318   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.722326   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:48.722335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:48.722400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:48.762474   72639 cri.go:89] found id: ""
	I1014 15:04:48.762506   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.762518   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:48.762528   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:48.762553   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:48.776628   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:48.776652   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:48.849904   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:48.849928   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:48.849941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:48.927033   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:48.927068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.970775   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:48.970807   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:51.521113   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:51.535318   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:51.535389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:51.582631   72639 cri.go:89] found id: ""
	I1014 15:04:51.582658   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.582666   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:51.582671   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:51.582721   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:51.655323   72639 cri.go:89] found id: ""
	I1014 15:04:51.655362   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.655371   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:51.655376   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:51.655433   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:51.722837   72639 cri.go:89] found id: ""
	I1014 15:04:51.722863   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.722875   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:51.722882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:51.722939   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:51.759917   72639 cri.go:89] found id: ""
	I1014 15:04:51.759946   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.759957   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:51.759963   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:51.760023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:51.798656   72639 cri.go:89] found id: ""
	I1014 15:04:51.798689   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.798702   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:51.798711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:51.798777   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:51.839285   72639 cri.go:89] found id: ""
	I1014 15:04:51.839312   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.839324   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:51.839334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:51.839391   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:51.876997   72639 cri.go:89] found id: ""
	I1014 15:04:51.877028   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.877038   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:51.877045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:51.877091   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:51.913991   72639 cri.go:89] found id: ""
	I1014 15:04:51.914020   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.914028   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:51.914036   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:51.914046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:51.993392   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:51.993427   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:52.039722   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:52.039756   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:52.090901   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:52.090937   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:52.105014   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:52.105052   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:52.175505   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:51.137343   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.636660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.575054   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.067208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:52.809933   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.810498   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.676549   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:54.690113   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:54.690204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:54.726478   72639 cri.go:89] found id: ""
	I1014 15:04:54.726511   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.726523   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:54.726538   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:54.726611   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:54.764990   72639 cri.go:89] found id: ""
	I1014 15:04:54.765017   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.765025   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:54.765031   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:54.765095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:54.804779   72639 cri.go:89] found id: ""
	I1014 15:04:54.804808   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.804819   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:54.804828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:54.804886   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:54.848657   72639 cri.go:89] found id: ""
	I1014 15:04:54.848682   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.848698   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:54.848705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:54.848765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:54.886806   72639 cri.go:89] found id: ""
	I1014 15:04:54.886834   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.886845   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:54.886853   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:54.886912   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:54.923297   72639 cri.go:89] found id: ""
	I1014 15:04:54.923323   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.923330   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:54.923335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:54.923380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:54.966297   72639 cri.go:89] found id: ""
	I1014 15:04:54.966321   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.966329   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:54.966334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:54.966382   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:55.012047   72639 cri.go:89] found id: ""
	I1014 15:04:55.012071   72639 logs.go:282] 0 containers: []
	W1014 15:04:55.012079   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:55.012087   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:55.012097   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:55.066031   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:55.066063   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:55.080954   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:55.080981   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:55.159644   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:55.159670   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:55.159683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:55.243303   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:55.243341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:57.784555   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:57.799051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:57.799132   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:57.841084   72639 cri.go:89] found id: ""
	I1014 15:04:57.841108   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.841115   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:57.841121   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:57.841167   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:57.881510   72639 cri.go:89] found id: ""
	I1014 15:04:57.881542   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.881555   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:57.881562   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:57.881624   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:57.916893   72639 cri.go:89] found id: ""
	I1014 15:04:57.916923   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.916934   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:57.916940   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:57.916988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:57.956991   72639 cri.go:89] found id: ""
	I1014 15:04:57.957023   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.957036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:57.957046   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:57.957118   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:57.993765   72639 cri.go:89] found id: ""
	I1014 15:04:57.993792   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.993803   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:57.993809   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:57.993869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:56.136994   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.137736   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:55.566021   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.567950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:00.068276   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.310643   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:59.808898   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.032044   72639 cri.go:89] found id: ""
	I1014 15:04:58.032085   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.032098   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:58.032107   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:58.032173   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:58.069733   72639 cri.go:89] found id: ""
	I1014 15:04:58.069754   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.069762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:58.069767   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:58.069813   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:58.105851   72639 cri.go:89] found id: ""
	I1014 15:04:58.105880   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.105891   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:58.105901   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:58.105914   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:58.159922   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:58.159956   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:58.173779   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:58.173802   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:58.253551   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:58.253576   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:58.253591   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:58.342607   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:58.342647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:00.884705   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:00.900147   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:00.900215   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:00.940372   72639 cri.go:89] found id: ""
	I1014 15:05:00.940402   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.940413   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:00.940420   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:00.940489   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:00.981400   72639 cri.go:89] found id: ""
	I1014 15:05:00.981431   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.981441   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:00.981447   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:00.981517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:01.021981   72639 cri.go:89] found id: ""
	I1014 15:05:01.022002   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.022011   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:01.022016   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:01.022067   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:01.056976   72639 cri.go:89] found id: ""
	I1014 15:05:01.057005   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.057013   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:01.057020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:01.057063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:01.092702   72639 cri.go:89] found id: ""
	I1014 15:05:01.092732   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.092739   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:01.092745   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:01.092803   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:01.128861   72639 cri.go:89] found id: ""
	I1014 15:05:01.128892   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.128902   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:01.128908   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:01.128958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:01.162672   72639 cri.go:89] found id: ""
	I1014 15:05:01.162702   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.162712   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:01.162719   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:01.162791   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:01.202724   72639 cri.go:89] found id: ""
	I1014 15:05:01.202751   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.202761   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:01.202770   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:01.202785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:01.280702   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:01.280723   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:01.280735   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:01.362909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:01.362943   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:01.406737   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:01.406766   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:01.460090   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:01.460125   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:00.636730   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.136587   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:02.568415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:05.066568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:01.809661   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:04.309079   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:06.309544   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.975661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:03.989811   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:03.989874   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:04.028396   72639 cri.go:89] found id: ""
	I1014 15:05:04.028426   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.028438   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:04.028445   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:04.028499   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:04.065871   72639 cri.go:89] found id: ""
	I1014 15:05:04.065901   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.065912   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:04.065919   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:04.065980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:04.103155   72639 cri.go:89] found id: ""
	I1014 15:05:04.103184   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.103192   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:04.103198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:04.103248   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:04.139503   72639 cri.go:89] found id: ""
	I1014 15:05:04.139531   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.139539   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:04.139545   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:04.139601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:04.171638   72639 cri.go:89] found id: ""
	I1014 15:05:04.171663   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.171671   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:04.171676   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:04.171734   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:04.213720   72639 cri.go:89] found id: ""
	I1014 15:05:04.213751   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.213760   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:04.213766   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:04.213815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:04.248088   72639 cri.go:89] found id: ""
	I1014 15:05:04.248109   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.248117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:04.248121   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:04.248183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:04.286454   72639 cri.go:89] found id: ""
	I1014 15:05:04.286479   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.286487   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:04.286495   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:04.286506   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:04.339564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:04.339599   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:04.353034   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:04.353061   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:04.432764   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:04.432786   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:04.432797   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:04.514561   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:04.514613   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.057507   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:07.072798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:07.072873   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:07.113672   72639 cri.go:89] found id: ""
	I1014 15:05:07.113694   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.113701   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:07.113706   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:07.113761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:07.149321   72639 cri.go:89] found id: ""
	I1014 15:05:07.149348   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.149357   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:07.149362   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:07.149416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:07.185717   72639 cri.go:89] found id: ""
	I1014 15:05:07.185748   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.185760   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:07.185768   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:07.185822   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:07.225747   72639 cri.go:89] found id: ""
	I1014 15:05:07.225772   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.225783   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:07.225791   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:07.225843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:07.265834   72639 cri.go:89] found id: ""
	I1014 15:05:07.265864   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.265875   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:07.265882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:07.265944   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:07.300595   72639 cri.go:89] found id: ""
	I1014 15:05:07.300622   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.300631   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:07.300637   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:07.300686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:07.343249   72639 cri.go:89] found id: ""
	I1014 15:05:07.343280   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.343291   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:07.343298   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:07.343365   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:07.379525   72639 cri.go:89] found id: ""
	I1014 15:05:07.379549   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.379557   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:07.379564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:07.379576   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:07.393622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:07.393653   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:07.473973   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:07.473998   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:07.474013   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:07.556937   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:07.556971   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.602224   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:07.602249   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:05.137157   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.137297   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.137708   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.066795   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.566723   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:08.809562   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.309821   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:10.156920   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:10.170971   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:10.171037   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:10.206568   72639 cri.go:89] found id: ""
	I1014 15:05:10.206610   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.206623   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:10.206630   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:10.206689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:10.249075   72639 cri.go:89] found id: ""
	I1014 15:05:10.249101   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.249110   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:10.249121   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:10.249175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:10.285620   72639 cri.go:89] found id: ""
	I1014 15:05:10.285649   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.285660   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:10.285667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:10.285730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:10.322291   72639 cri.go:89] found id: ""
	I1014 15:05:10.322314   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.322322   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:10.322327   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:10.322379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:10.356691   72639 cri.go:89] found id: ""
	I1014 15:05:10.356720   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.356730   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:10.356738   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:10.356802   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:10.401192   72639 cri.go:89] found id: ""
	I1014 15:05:10.401223   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.401234   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:10.401242   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:10.401303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:10.438198   72639 cri.go:89] found id: ""
	I1014 15:05:10.438225   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.438236   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:10.438243   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:10.438380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:10.474142   72639 cri.go:89] found id: ""
	I1014 15:05:10.474166   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.474174   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:10.474181   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:10.474193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:10.546549   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:10.546569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:10.546582   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:10.624235   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:10.624268   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:10.664896   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:10.664926   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.719425   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:10.719464   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:11.637824   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.139552   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.566806   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.066803   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.809728   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.310153   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.234162   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:13.247614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:13.247689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:13.285040   72639 cri.go:89] found id: ""
	I1014 15:05:13.285068   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.285078   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:13.285086   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:13.285154   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:13.334084   72639 cri.go:89] found id: ""
	I1014 15:05:13.334125   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.334133   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:13.334139   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:13.334204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:13.369164   72639 cri.go:89] found id: ""
	I1014 15:05:13.369199   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.369211   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:13.369223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:13.369285   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:13.405202   72639 cri.go:89] found id: ""
	I1014 15:05:13.405232   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.405244   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:13.405252   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:13.405304   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:13.443271   72639 cri.go:89] found id: ""
	I1014 15:05:13.443302   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.443311   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:13.443317   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:13.443369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:13.483541   72639 cri.go:89] found id: ""
	I1014 15:05:13.483570   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.483580   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:13.483588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:13.483650   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:13.518580   72639 cri.go:89] found id: ""
	I1014 15:05:13.518622   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.518633   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:13.518641   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:13.518701   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:13.553638   72639 cri.go:89] found id: ""
	I1014 15:05:13.553668   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.553678   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:13.553688   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:13.553702   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:13.605379   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:13.605413   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.620525   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:13.620556   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:13.699628   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:13.699658   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:13.699672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:13.778006   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:13.778046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.316703   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:16.331511   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:16.331577   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:16.367045   72639 cri.go:89] found id: ""
	I1014 15:05:16.367075   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.367083   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:16.367089   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:16.367144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:16.403240   72639 cri.go:89] found id: ""
	I1014 15:05:16.403264   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.403274   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:16.403285   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:16.403344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:16.438570   72639 cri.go:89] found id: ""
	I1014 15:05:16.438612   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.438625   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:16.438632   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:16.438694   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:16.477153   72639 cri.go:89] found id: ""
	I1014 15:05:16.477174   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.477182   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:16.477187   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:16.477232   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:16.516308   72639 cri.go:89] found id: ""
	I1014 15:05:16.516336   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.516348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:16.516355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:16.516421   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:16.551337   72639 cri.go:89] found id: ""
	I1014 15:05:16.551365   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.551375   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:16.551383   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:16.551450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:16.587073   72639 cri.go:89] found id: ""
	I1014 15:05:16.587105   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.587117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:16.587125   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:16.587183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:16.623940   72639 cri.go:89] found id: ""
	I1014 15:05:16.623962   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.623970   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:16.623978   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:16.623989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.671593   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:16.671618   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:16.723057   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:16.723092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:16.737623   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:16.737656   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:16.809539   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:16.809569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:16.809592   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:16.636818   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.637340   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.566523   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.065985   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.809554   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.390406   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:19.404850   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:19.404928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:19.446931   72639 cri.go:89] found id: ""
	I1014 15:05:19.446962   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.446973   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:19.446980   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:19.447043   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:19.488112   72639 cri.go:89] found id: ""
	I1014 15:05:19.488136   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.488144   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:19.488150   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:19.488208   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:19.523333   72639 cri.go:89] found id: ""
	I1014 15:05:19.523365   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.523382   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:19.523389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:19.523447   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:19.557887   72639 cri.go:89] found id: ""
	I1014 15:05:19.557910   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.557918   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:19.557927   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:19.557972   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:19.593792   72639 cri.go:89] found id: ""
	I1014 15:05:19.593815   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.593822   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:19.593873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:19.593922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:19.628291   72639 cri.go:89] found id: ""
	I1014 15:05:19.628324   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.628335   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:19.628343   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:19.628405   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:19.664088   72639 cri.go:89] found id: ""
	I1014 15:05:19.664118   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.664130   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:19.664138   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:19.664211   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:19.700825   72639 cri.go:89] found id: ""
	I1014 15:05:19.700853   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.700863   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:19.700873   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:19.700886   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:19.741631   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:19.741666   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:19.792667   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:19.792706   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:19.806928   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:19.806965   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:19.880030   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:19.880059   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:19.880073   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.465251   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:22.479031   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:22.479096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:22.519123   72639 cri.go:89] found id: ""
	I1014 15:05:22.519147   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.519158   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:22.519171   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:22.519235   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:22.552250   72639 cri.go:89] found id: ""
	I1014 15:05:22.552277   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.552287   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:22.552294   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:22.552354   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:22.594213   72639 cri.go:89] found id: ""
	I1014 15:05:22.594243   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.594253   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:22.594261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:22.594310   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:22.630081   72639 cri.go:89] found id: ""
	I1014 15:05:22.630110   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.630121   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:22.630129   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:22.630195   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:22.665454   72639 cri.go:89] found id: ""
	I1014 15:05:22.665485   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.665497   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:22.665505   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:22.665568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:22.710697   72639 cri.go:89] found id: ""
	I1014 15:05:22.710725   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.710734   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:22.710742   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:22.710798   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:22.748486   72639 cri.go:89] found id: ""
	I1014 15:05:22.748516   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.748527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:22.748534   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:22.748594   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:22.784646   72639 cri.go:89] found id: ""
	I1014 15:05:22.784674   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.784684   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:22.784695   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:22.784709   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:22.797853   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:22.797880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:22.875382   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:22.875406   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:22.875422   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.957055   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:22.957089   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:20.638448   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.137051   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.066950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.566775   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.309958   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:25.810168   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.008642   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:23.008672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.561277   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:25.575543   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:25.575606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:25.614260   72639 cri.go:89] found id: ""
	I1014 15:05:25.614283   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.614291   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:25.614296   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:25.614353   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:25.654267   72639 cri.go:89] found id: ""
	I1014 15:05:25.654295   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.654307   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:25.654314   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:25.654385   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:25.707597   72639 cri.go:89] found id: ""
	I1014 15:05:25.707626   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.707637   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:25.707644   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:25.707707   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:25.747477   72639 cri.go:89] found id: ""
	I1014 15:05:25.747500   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.747508   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:25.747513   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:25.747571   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:25.785245   72639 cri.go:89] found id: ""
	I1014 15:05:25.785270   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.785279   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:25.785288   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:25.785342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:25.820619   72639 cri.go:89] found id: ""
	I1014 15:05:25.820643   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.820651   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:25.820665   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:25.820722   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:25.861644   72639 cri.go:89] found id: ""
	I1014 15:05:25.861665   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.861673   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:25.861678   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:25.861724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:25.901009   72639 cri.go:89] found id: ""
	I1014 15:05:25.901032   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.901046   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:25.901056   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:25.901068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:25.942918   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:25.942941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.993931   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:25.993964   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:26.008252   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:26.008280   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:26.087316   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:26.087336   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:26.087347   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:25.636727   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:27.637053   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:26.066529   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.567224   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.308855   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:30.811310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.667377   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:28.682586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:28.682682   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:28.729576   72639 cri.go:89] found id: ""
	I1014 15:05:28.729600   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.729608   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:28.729614   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:28.729673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:28.766637   72639 cri.go:89] found id: ""
	I1014 15:05:28.766669   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.766682   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:28.766690   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:28.766762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:28.802280   72639 cri.go:89] found id: ""
	I1014 15:05:28.802308   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.802317   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:28.802322   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:28.802395   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:28.840788   72639 cri.go:89] found id: ""
	I1014 15:05:28.840822   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.840833   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:28.840841   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:28.840898   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:28.878403   72639 cri.go:89] found id: ""
	I1014 15:05:28.878437   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.878447   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:28.878453   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:28.878505   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:28.919054   72639 cri.go:89] found id: ""
	I1014 15:05:28.919082   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.919090   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:28.919096   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:28.919146   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:28.955097   72639 cri.go:89] found id: ""
	I1014 15:05:28.955124   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.955134   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:28.955142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:28.955214   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:28.995681   72639 cri.go:89] found id: ""
	I1014 15:05:28.995711   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.995722   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:28.995731   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:28.995746   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:29.073041   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:29.073066   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:29.073083   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:29.152803   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:29.152838   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:29.192205   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:29.192239   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:29.248128   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:29.248166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:31.762647   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:31.776372   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:31.776454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:31.812234   72639 cri.go:89] found id: ""
	I1014 15:05:31.812259   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.812268   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:31.812275   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:31.812347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:31.850248   72639 cri.go:89] found id: ""
	I1014 15:05:31.850277   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.850294   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:31.850301   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:31.850363   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:31.887768   72639 cri.go:89] found id: ""
	I1014 15:05:31.887796   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.887808   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:31.887816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:31.887870   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:31.923434   72639 cri.go:89] found id: ""
	I1014 15:05:31.923464   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.923476   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:31.923483   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:31.923547   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:31.961027   72639 cri.go:89] found id: ""
	I1014 15:05:31.961055   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.961066   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:31.961073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:31.961135   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:31.996222   72639 cri.go:89] found id: ""
	I1014 15:05:31.996250   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.996260   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:31.996267   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:31.996329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:32.034396   72639 cri.go:89] found id: ""
	I1014 15:05:32.034441   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.034452   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:32.034460   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:32.034528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:32.080105   72639 cri.go:89] found id: ""
	I1014 15:05:32.080142   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.080153   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:32.080164   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:32.080178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:32.161120   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:32.161151   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:32.213511   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:32.213546   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:32.271250   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:32.271287   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:32.285452   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:32.285483   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:32.366108   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:30.136896   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:32.138906   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:31.066229   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.066370   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.067821   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.309846   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.310713   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:34.867317   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:34.882058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:34.882125   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.926220   72639 cri.go:89] found id: ""
	I1014 15:05:34.926251   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.926261   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:34.926268   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:34.926341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:34.965657   72639 cri.go:89] found id: ""
	I1014 15:05:34.965691   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.965702   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:34.965709   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:34.965775   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:35.002422   72639 cri.go:89] found id: ""
	I1014 15:05:35.002446   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.002454   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:35.002459   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:35.002523   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:35.040029   72639 cri.go:89] found id: ""
	I1014 15:05:35.040057   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.040067   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:35.040073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:35.040137   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:35.077041   72639 cri.go:89] found id: ""
	I1014 15:05:35.077067   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.077075   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:35.077080   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:35.077129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:35.113723   72639 cri.go:89] found id: ""
	I1014 15:05:35.113754   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.113763   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:35.113770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:35.113854   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:35.152003   72639 cri.go:89] found id: ""
	I1014 15:05:35.152025   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.152033   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:35.152038   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:35.152084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:35.186707   72639 cri.go:89] found id: ""
	I1014 15:05:35.186735   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.186746   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:35.186756   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:35.186769   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:35.267899   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:35.267941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:35.310382   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:35.310414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:35.364811   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:35.364852   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:35.378359   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:35.378386   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:35.453522   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:37.953807   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:37.967515   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:37.967579   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.637257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.137643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.566344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:39.566704   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.810414   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:40.308798   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:38.007923   72639 cri.go:89] found id: ""
	I1014 15:05:38.007955   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.007964   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:38.007969   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:38.008023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:38.047451   72639 cri.go:89] found id: ""
	I1014 15:05:38.047476   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.047484   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:38.047490   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:38.047542   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:38.087141   72639 cri.go:89] found id: ""
	I1014 15:05:38.087165   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.087174   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:38.087186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:38.087234   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:38.126556   72639 cri.go:89] found id: ""
	I1014 15:05:38.126583   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.126604   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:38.126612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:38.126670   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:38.165318   72639 cri.go:89] found id: ""
	I1014 15:05:38.165341   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.165350   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:38.165356   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:38.165400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:38.199498   72639 cri.go:89] found id: ""
	I1014 15:05:38.199533   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.199544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:38.199553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:38.199618   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:38.235030   72639 cri.go:89] found id: ""
	I1014 15:05:38.235058   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.235067   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:38.235072   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:38.235129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:38.268900   72639 cri.go:89] found id: ""
	I1014 15:05:38.268926   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.268935   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:38.268943   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:38.268957   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:38.282503   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:38.282532   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:38.357943   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:38.357972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:38.357987   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:38.448417   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:38.448453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:38.490023   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:38.490049   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.045691   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:41.061188   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:41.061251   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:41.102885   72639 cri.go:89] found id: ""
	I1014 15:05:41.102909   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.102917   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:41.102923   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:41.102971   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:41.139402   72639 cri.go:89] found id: ""
	I1014 15:05:41.139427   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.139437   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:41.139444   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:41.139501   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:41.179881   72639 cri.go:89] found id: ""
	I1014 15:05:41.179926   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.179939   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:41.179946   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:41.180008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:41.215861   72639 cri.go:89] found id: ""
	I1014 15:05:41.215897   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.215910   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:41.215919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:41.215987   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:41.251314   72639 cri.go:89] found id: ""
	I1014 15:05:41.251341   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.251351   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:41.251355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:41.251404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:41.285986   72639 cri.go:89] found id: ""
	I1014 15:05:41.286010   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.286017   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:41.286025   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:41.286071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:41.323730   72639 cri.go:89] found id: ""
	I1014 15:05:41.323756   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.323764   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:41.323769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:41.323816   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:41.360787   72639 cri.go:89] found id: ""
	I1014 15:05:41.360817   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.360825   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:41.360834   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:41.360847   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:41.403137   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:41.403172   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.459217   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:41.459253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:41.473529   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:41.473558   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:41.547384   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:41.547405   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:41.547416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:39.637477   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.137176   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:41.569245   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.066760   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.309212   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.310281   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.129494   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:44.144061   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:44.144129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:44.185872   72639 cri.go:89] found id: ""
	I1014 15:05:44.185896   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.185904   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:44.185909   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:44.185955   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:44.222618   72639 cri.go:89] found id: ""
	I1014 15:05:44.222648   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.222658   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:44.222663   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:44.222723   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:44.260730   72639 cri.go:89] found id: ""
	I1014 15:05:44.260761   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.260773   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:44.260780   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:44.260872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:44.303033   72639 cri.go:89] found id: ""
	I1014 15:05:44.303124   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.303141   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:44.303150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:44.303223   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:44.344573   72639 cri.go:89] found id: ""
	I1014 15:05:44.344600   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.344609   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:44.344614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:44.344660   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:44.386091   72639 cri.go:89] found id: ""
	I1014 15:05:44.386122   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.386131   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:44.386137   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:44.386199   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:44.424609   72639 cri.go:89] found id: ""
	I1014 15:05:44.424634   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.424644   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:44.424656   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:44.424724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:44.463997   72639 cri.go:89] found id: ""
	I1014 15:05:44.464023   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.464033   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:44.464043   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:44.464057   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:44.516883   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:44.516921   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:44.530785   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:44.530820   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:44.605202   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:44.605229   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:44.605245   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.685277   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:44.685312   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:47.227851   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:47.242737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:47.242817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:47.279395   72639 cri.go:89] found id: ""
	I1014 15:05:47.279421   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.279428   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:47.279434   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:47.279495   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:47.315002   72639 cri.go:89] found id: ""
	I1014 15:05:47.315032   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.315043   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:47.315050   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:47.315120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:47.354133   72639 cri.go:89] found id: ""
	I1014 15:05:47.354162   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.354173   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:47.354180   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:47.354245   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:47.389394   72639 cri.go:89] found id: ""
	I1014 15:05:47.389419   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.389427   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:47.389439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:47.389498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:47.426564   72639 cri.go:89] found id: ""
	I1014 15:05:47.426592   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.426619   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:47.426627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:47.426676   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:47.466953   72639 cri.go:89] found id: ""
	I1014 15:05:47.466980   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.466989   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:47.466996   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:47.467065   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:47.508563   72639 cri.go:89] found id: ""
	I1014 15:05:47.508595   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.508605   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:47.508613   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:47.508665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:47.548974   72639 cri.go:89] found id: ""
	I1014 15:05:47.549002   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.549012   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:47.549022   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:47.549036   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:47.604768   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:47.604799   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:47.619681   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:47.619717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:47.692479   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:47.692506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:47.692522   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:47.773711   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:47.773751   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:44.637916   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:47.137070   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.566472   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.566743   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.809406   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.811359   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:51.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.314509   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:50.330883   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:50.330958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:50.375090   72639 cri.go:89] found id: ""
	I1014 15:05:50.375121   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.375133   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:50.375140   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:50.375201   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:50.415000   72639 cri.go:89] found id: ""
	I1014 15:05:50.415031   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.415041   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:50.415048   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:50.415099   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:50.453937   72639 cri.go:89] found id: ""
	I1014 15:05:50.453967   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.453976   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:50.453983   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:50.454047   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:50.498752   72639 cri.go:89] found id: ""
	I1014 15:05:50.498778   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.498785   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:50.498790   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:50.498858   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:50.537819   72639 cri.go:89] found id: ""
	I1014 15:05:50.537855   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.537864   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:50.537871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:50.537920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:50.577141   72639 cri.go:89] found id: ""
	I1014 15:05:50.577168   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.577179   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:50.577186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:50.577250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:50.612462   72639 cri.go:89] found id: ""
	I1014 15:05:50.612504   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.612527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:50.612535   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:50.612597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:50.648816   72639 cri.go:89] found id: ""
	I1014 15:05:50.648845   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.648855   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:50.648866   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:50.648879   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:50.662546   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:50.662578   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:50.733128   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:50.733152   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:50.733166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:50.810884   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:50.810913   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.855878   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:50.855905   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:49.637103   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:52.137615   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.567300   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.066883   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.810090   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.312861   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.413608   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:53.428380   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:53.428453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:53.463440   72639 cri.go:89] found id: ""
	I1014 15:05:53.463464   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.463473   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:53.463479   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:53.463534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:53.499024   72639 cri.go:89] found id: ""
	I1014 15:05:53.499050   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.499058   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:53.499064   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:53.499121   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:53.534396   72639 cri.go:89] found id: ""
	I1014 15:05:53.534425   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.534435   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:53.534442   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:53.534504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:53.571396   72639 cri.go:89] found id: ""
	I1014 15:05:53.571422   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.571432   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:53.571439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:53.571496   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:53.606219   72639 cri.go:89] found id: ""
	I1014 15:05:53.606247   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.606254   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:53.606260   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:53.606309   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:53.644906   72639 cri.go:89] found id: ""
	I1014 15:05:53.644929   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.644938   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:53.644945   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:53.645005   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:53.684764   72639 cri.go:89] found id: ""
	I1014 15:05:53.684795   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.684808   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:53.684817   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:53.684872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:53.720559   72639 cri.go:89] found id: ""
	I1014 15:05:53.720587   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.720596   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:53.720605   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:53.720626   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.773759   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:53.773798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:53.787688   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:53.787717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:53.863141   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:53.863163   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:53.863176   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:53.942949   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:53.942989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:56.487207   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:56.500670   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:56.500730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:56.533851   72639 cri.go:89] found id: ""
	I1014 15:05:56.533882   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.533894   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:56.533901   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:56.533964   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:56.573169   72639 cri.go:89] found id: ""
	I1014 15:05:56.573194   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.573201   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:56.573207   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:56.573260   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:56.608110   72639 cri.go:89] found id: ""
	I1014 15:05:56.608138   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.608151   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:56.608158   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:56.608218   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:56.646030   72639 cri.go:89] found id: ""
	I1014 15:05:56.646054   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.646061   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:56.646067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:56.646112   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:56.689427   72639 cri.go:89] found id: ""
	I1014 15:05:56.689455   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.689465   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:56.689473   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:56.689528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:56.723831   72639 cri.go:89] found id: ""
	I1014 15:05:56.723856   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.723865   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:56.723871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:56.723928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:56.756700   72639 cri.go:89] found id: ""
	I1014 15:05:56.756725   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.756734   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:56.756741   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:56.756808   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:56.788201   72639 cri.go:89] found id: ""
	I1014 15:05:56.788228   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.788235   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:56.788242   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:56.788253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:56.847840   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:56.847876   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:56.861984   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:56.862016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:56.933190   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:56.933214   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:56.933226   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:57.015909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:57.015958   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
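	(For reference, each retry cycle recorded above runs the same command sequence on the guest over SSH; this is a condensed sketch assembled from the Run: lines, not additional test output — the kubectl binary path is specific to this v1.20.0 run.)
	    # per-cycle commands, as logged above
	    sudo pgrep -xnf kube-apiserver.*minikube.*
	    sudo crictl ps -a --quiet --name=kube-apiserver   # repeated for etcd, coredns, kube-scheduler,
	                                                      # kube-proxy, kube-controller-manager, kindnet,
	                                                      # kubernetes-dashboard
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a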
	I1014 15:05:54.636591   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.638712   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.137008   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:55.566153   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:57.566963   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.067261   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:58.810164   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.811078   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.559421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:59.575593   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:59.575673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:59.611369   72639 cri.go:89] found id: ""
	I1014 15:05:59.611399   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.611409   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:59.611416   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:59.611485   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:59.645786   72639 cri.go:89] found id: ""
	I1014 15:05:59.645817   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.645827   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:59.645834   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:59.645895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:59.681463   72639 cri.go:89] found id: ""
	I1014 15:05:59.681491   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.681499   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:59.681504   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:59.681553   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:59.723738   72639 cri.go:89] found id: ""
	I1014 15:05:59.723767   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.723775   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:59.723782   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:59.723845   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:59.763890   72639 cri.go:89] found id: ""
	I1014 15:05:59.763919   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.763958   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:59.763966   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:59.764027   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:59.802981   72639 cri.go:89] found id: ""
	I1014 15:05:59.803007   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.803015   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:59.803021   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:59.803074   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:59.841887   72639 cri.go:89] found id: ""
	I1014 15:05:59.841916   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.841927   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:59.841934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:59.841989   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:59.877190   72639 cri.go:89] found id: ""
	I1014 15:05:59.877221   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.877231   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:59.877240   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:59.877254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:59.890838   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:59.890864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:59.970122   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:59.970147   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:59.970163   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:00.058994   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:00.059032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:00.103227   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:00.103262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:02.655437   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:02.671240   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:02.671307   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:02.708826   72639 cri.go:89] found id: ""
	I1014 15:06:02.708859   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.708871   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:02.708879   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:02.708943   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:02.744504   72639 cri.go:89] found id: ""
	I1014 15:06:02.744535   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.744546   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:02.744553   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:02.744615   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:02.781144   72639 cri.go:89] found id: ""
	I1014 15:06:02.781180   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.781193   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:02.781201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:02.781281   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:02.819527   72639 cri.go:89] found id: ""
	I1014 15:06:02.819558   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.819567   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:02.819572   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:02.819630   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:02.855653   72639 cri.go:89] found id: ""
	I1014 15:06:02.855683   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.855693   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:02.855700   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:02.855761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:02.900843   72639 cri.go:89] found id: ""
	I1014 15:06:02.900876   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.900888   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:02.900896   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:02.900961   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:02.941812   72639 cri.go:89] found id: ""
	I1014 15:06:02.941840   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.941851   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:02.941857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:02.941919   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:02.980213   72639 cri.go:89] found id: ""
	I1014 15:06:02.980238   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.980246   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:02.980253   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:02.980265   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:00.130683   72173 pod_ready.go:82] duration metric: took 4m0.000550021s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:00.130707   72173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:06:00.130723   72173 pod_ready.go:39] duration metric: took 4m13.708579322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:00.130753   72173 kubeadm.go:597] duration metric: took 4m21.979284634s to restartPrimaryControlPlane
	W1014 15:06:00.130836   72173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:00.130870   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:02.566183   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.066638   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.309953   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.311484   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.034263   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:03.034301   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:03.048574   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:03.048606   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:03.121902   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:03.121925   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:03.121939   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:03.197407   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:03.197445   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:05.737723   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:05.751892   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:05.751959   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:05.789209   72639 cri.go:89] found id: ""
	I1014 15:06:05.789235   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.789242   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:05.789247   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:05.789294   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:05.826189   72639 cri.go:89] found id: ""
	I1014 15:06:05.826220   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.826229   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:05.826236   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:05.826344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:05.864264   72639 cri.go:89] found id: ""
	I1014 15:06:05.864297   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.864308   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:05.864314   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:05.864371   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:05.899697   72639 cri.go:89] found id: ""
	I1014 15:06:05.899724   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.899732   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:05.899737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:05.899784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:05.939552   72639 cri.go:89] found id: ""
	I1014 15:06:05.939583   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.939593   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:05.939601   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:05.939668   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:05.999732   72639 cri.go:89] found id: ""
	I1014 15:06:05.999759   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.999770   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:05.999776   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:05.999834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:06.036228   72639 cri.go:89] found id: ""
	I1014 15:06:06.036259   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.036276   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:06.036284   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:06.036343   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:06.071744   72639 cri.go:89] found id: ""
	I1014 15:06:06.071774   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.071785   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:06.071795   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:06.071808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:06.125737   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:06.125774   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:06.139150   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:06.139177   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:06.206731   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:06.206757   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:06.206773   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:06.287183   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:06.287218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:07.565983   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.065897   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:07.809832   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.309290   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:08.827345   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:08.841290   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:08.841384   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:08.877789   72639 cri.go:89] found id: ""
	I1014 15:06:08.877815   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.877824   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:08.877832   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:08.877895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:08.912491   72639 cri.go:89] found id: ""
	I1014 15:06:08.912517   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.912525   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:08.912530   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:08.912586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:08.948727   72639 cri.go:89] found id: ""
	I1014 15:06:08.948755   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.948765   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:08.948773   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:08.948837   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:08.984397   72639 cri.go:89] found id: ""
	I1014 15:06:08.984428   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.984440   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:08.984448   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:08.984498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:09.019222   72639 cri.go:89] found id: ""
	I1014 15:06:09.019250   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.019260   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:09.019268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:09.019329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:09.058309   72639 cri.go:89] found id: ""
	I1014 15:06:09.058335   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.058346   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:09.058353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:09.058415   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:09.096508   72639 cri.go:89] found id: ""
	I1014 15:06:09.096535   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.096544   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:09.096550   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:09.096599   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:09.134564   72639 cri.go:89] found id: ""
	I1014 15:06:09.134611   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.134624   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:09.134635   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:09.134647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:09.188220   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:09.188254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:09.203119   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:09.203149   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:09.279357   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:09.279379   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:09.279390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:09.364219   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:09.364253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:11.910976   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:11.926067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:11.926149   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:11.966238   72639 cri.go:89] found id: ""
	I1014 15:06:11.966271   72639 logs.go:282] 0 containers: []
	W1014 15:06:11.966282   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:11.966289   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:11.966350   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:12.002580   72639 cri.go:89] found id: ""
	I1014 15:06:12.002617   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.002630   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:12.002637   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:12.002698   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:12.037014   72639 cri.go:89] found id: ""
	I1014 15:06:12.037037   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.037046   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:12.037051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:12.037111   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:12.070937   72639 cri.go:89] found id: ""
	I1014 15:06:12.070957   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.070965   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:12.070970   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:12.071019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:12.104920   72639 cri.go:89] found id: ""
	I1014 15:06:12.104949   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.104960   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:12.104967   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:12.105026   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:12.142498   72639 cri.go:89] found id: ""
	I1014 15:06:12.142530   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.142544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:12.142555   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:12.142628   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:12.179590   72639 cri.go:89] found id: ""
	I1014 15:06:12.179613   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.179621   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:12.179627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:12.179675   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:12.213947   72639 cri.go:89] found id: ""
	I1014 15:06:12.213973   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.213981   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:12.213989   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:12.213998   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:12.268214   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:12.268257   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:12.283561   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:12.283594   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:12.382344   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:12.382367   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:12.382377   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:12.469818   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:12.469854   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:12.066154   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.565962   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:12.310167   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.810273   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:15.011529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:15.025355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:15.025423   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:15.060996   72639 cri.go:89] found id: ""
	I1014 15:06:15.061028   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.061040   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:15.061047   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:15.061120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:15.103050   72639 cri.go:89] found id: ""
	I1014 15:06:15.103074   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.103082   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:15.103088   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:15.103140   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:15.140095   72639 cri.go:89] found id: ""
	I1014 15:06:15.140122   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.140132   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:15.140139   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:15.140207   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:15.174612   72639 cri.go:89] found id: ""
	I1014 15:06:15.174642   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.174654   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:15.174669   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:15.174737   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:15.209116   72639 cri.go:89] found id: ""
	I1014 15:06:15.209142   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.209152   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:15.209160   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:15.209221   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:15.242857   72639 cri.go:89] found id: ""
	I1014 15:06:15.242885   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.242896   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:15.242902   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:15.242966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:15.283038   72639 cri.go:89] found id: ""
	I1014 15:06:15.283066   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.283076   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:15.283083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:15.283144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:15.319577   72639 cri.go:89] found id: ""
	I1014 15:06:15.319604   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.319612   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:15.319622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:15.319636   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:15.391485   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:15.391506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:15.391520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:15.470140   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:15.470192   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.513098   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:15.513132   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:15.568275   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:15.568305   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:17.065956   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.566207   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:17.308463   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.309185   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.310841   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:18.085915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:18.113889   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:18.113958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:18.167486   72639 cri.go:89] found id: ""
	I1014 15:06:18.167511   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.167519   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:18.167525   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:18.167568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:18.230244   72639 cri.go:89] found id: ""
	I1014 15:06:18.230273   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.230283   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:18.230291   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:18.230351   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:18.264223   72639 cri.go:89] found id: ""
	I1014 15:06:18.264252   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.264261   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:18.264268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:18.264332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:18.298719   72639 cri.go:89] found id: ""
	I1014 15:06:18.298750   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.298762   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:18.298770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:18.298843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:18.335113   72639 cri.go:89] found id: ""
	I1014 15:06:18.335140   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.335147   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:18.335153   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:18.335212   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:18.373690   72639 cri.go:89] found id: ""
	I1014 15:06:18.373721   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.373736   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:18.373743   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:18.373792   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:18.411138   72639 cri.go:89] found id: ""
	I1014 15:06:18.411171   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.411182   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:18.411190   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:18.411250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:18.451281   72639 cri.go:89] found id: ""
	I1014 15:06:18.451306   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.451314   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:18.451323   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:18.451334   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:18.502141   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:18.502178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.517449   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:18.517476   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:18.586737   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:18.586760   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:18.586776   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:18.670234   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:18.670270   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.210200   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:21.222998   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.223053   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.257132   72639 cri.go:89] found id: ""
	I1014 15:06:21.257160   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.257167   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:21.257174   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.257237   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.290905   72639 cri.go:89] found id: ""
	I1014 15:06:21.290933   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.290945   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:21.290952   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.291007   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.331067   72639 cri.go:89] found id: ""
	I1014 15:06:21.331098   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.331108   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:21.331128   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.331178   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.370042   72639 cri.go:89] found id: ""
	I1014 15:06:21.370069   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.370077   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:21.370083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.370141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:21.414900   72639 cri.go:89] found id: ""
	I1014 15:06:21.414920   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.414932   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:21.414938   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:21.414985   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:21.452914   72639 cri.go:89] found id: ""
	I1014 15:06:21.452941   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.452952   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:21.452960   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:21.453022   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:21.486725   72639 cri.go:89] found id: ""
	I1014 15:06:21.486752   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.486763   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:21.486770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:21.486831   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:21.524012   72639 cri.go:89] found id: ""
	I1014 15:06:21.524034   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.524042   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:21.524049   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:21.524059   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:21.603238   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:21.603279   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.645655   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:21.645689   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:21.701053   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:21.701092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:21.715515   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:21.715542   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:21.781831   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:22.067051   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:24.567173   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.810342   72390 pod_ready.go:82] duration metric: took 4m0.007657098s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:21.810365   72390 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 15:06:21.810382   72390 pod_ready.go:39] duration metric: took 4m7.92113061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
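	(The Ready condition that pod_ready.go polls above can be checked by hand; a minimal sketch, assuming kubectl is pointed at the same cluster — the pod name is taken from the log, and the jsonpath query is just one way to read the condition.)
	    # prints "True" once the pod is Ready; these runs kept returning "False" until the 4m0s timeout
	    kubectl -n kube-system get pod metrics-server-6867b74b74-bcrqs \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'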
	I1014 15:06:21.810401   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:21.810433   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.810488   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.856565   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:21.856587   72390 cri.go:89] found id: ""
	I1014 15:06:21.856594   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:21.856654   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.861036   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.861091   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.898486   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:21.898517   72390 cri.go:89] found id: ""
	I1014 15:06:21.898528   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:21.898587   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.903145   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.903245   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.941127   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:21.941164   72390 cri.go:89] found id: ""
	I1014 15:06:21.941173   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:21.941232   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.945584   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.945658   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.994370   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:21.994398   72390 cri.go:89] found id: ""
	I1014 15:06:21.994407   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:21.994454   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.998498   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.998547   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:22.037415   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.037443   72390 cri.go:89] found id: ""
	I1014 15:06:22.037453   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:22.037507   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.041882   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:22.041947   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:22.079219   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.079243   72390 cri.go:89] found id: ""
	I1014 15:06:22.079252   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:22.079319   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.083373   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:22.083432   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:22.120795   72390 cri.go:89] found id: ""
	I1014 15:06:22.120818   72390 logs.go:282] 0 containers: []
	W1014 15:06:22.120825   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:22.120832   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:22.120889   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:22.158545   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.158571   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.158577   72390 cri.go:89] found id: ""
	I1014 15:06:22.158586   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:22.158662   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.162500   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.166734   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:22.166759   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.202711   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:22.202736   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:22.279594   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:22.279635   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:22.293836   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:22.293863   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:22.335451   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:22.335478   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:22.374244   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:22.374274   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.422538   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:22.422567   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.486973   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:22.487009   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.528871   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:22.528899   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:22.575947   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:22.575982   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:22.713356   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:22.713387   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:22.760315   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:22.760348   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:22.811144   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:22.811169   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:25.780847   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:25.800698   72390 api_server.go:72] duration metric: took 4m18.640749756s to wait for apiserver process to appear ...
	I1014 15:06:25.800733   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:25.800779   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:25.800845   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:25.841159   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:25.841193   72390 cri.go:89] found id: ""
	I1014 15:06:25.841203   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:25.841259   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.845503   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:25.845560   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:25.884122   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:25.884151   72390 cri.go:89] found id: ""
	I1014 15:06:25.884161   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:25.884223   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.889638   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:25.889700   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:25.931199   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:25.931220   72390 cri.go:89] found id: ""
	I1014 15:06:25.931230   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:25.931285   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.936063   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:25.936127   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:25.979162   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:25.979188   72390 cri.go:89] found id: ""
	I1014 15:06:25.979197   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:25.979254   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.983550   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:25.983611   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:26.021835   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:26.021854   72390 cri.go:89] found id: ""
	I1014 15:06:26.021862   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:26.021911   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.026005   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:26.026073   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:26.067719   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:26.067740   72390 cri.go:89] found id: ""
	I1014 15:06:26.067749   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:26.067803   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.073387   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:26.073453   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:26.116305   72390 cri.go:89] found id: ""
	I1014 15:06:26.116336   72390 logs.go:282] 0 containers: []
	W1014 15:06:26.116349   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:26.116358   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:26.116427   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:26.156959   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.156985   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.156991   72390 cri.go:89] found id: ""
	I1014 15:06:26.156999   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:26.157051   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.161437   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.165696   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:26.165718   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:26.282026   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:26.282056   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:26.333504   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:26.333543   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:26.376435   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:26.376469   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.416633   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:26.416660   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.388546   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.257645941s)
	I1014 15:06:26.388631   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:26.407118   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:26.417718   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:26.428364   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:26.428391   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:26.428451   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:26.437953   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:26.438026   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:26.448356   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:26.458476   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:26.458541   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:26.469941   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.482934   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:26.483016   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.495682   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:26.506113   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:26.506176   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:26.517784   72173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:26.568927   72173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:06:26.568978   72173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:26.685727   72173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:26.685855   72173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:26.685963   72173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:26.693948   72173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:26.696177   72173 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:26.696269   72173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:26.696318   72173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:26.696388   72173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:26.696438   72173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:26.696495   72173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:26.696536   72173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:26.696588   72173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:26.696639   72173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:26.696696   72173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:26.696760   72173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:26.700275   72173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:26.700406   72173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:26.831734   72173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:27.336318   72173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:06:27.574604   72173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:27.681370   72173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:27.788769   72173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:27.789324   72173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:27.791842   72173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:24.282018   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:24.295177   72639 kubeadm.go:597] duration metric: took 4m4.450514459s to restartPrimaryControlPlane
	W1014 15:06:24.295255   72639 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:24.295283   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:27.793786   72173 out.go:235]   - Booting up control plane ...
	I1014 15:06:27.793891   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:27.793980   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:27.794089   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:27.815223   72173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:27.821764   72173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:27.821817   72173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:27.965327   72173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:06:27.965707   72173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:06:28.967332   72173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001260991s
	I1014 15:06:28.967473   72173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:06:29.238014   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.942706631s)
	I1014 15:06:29.238096   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:29.258804   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:29.269440   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:29.279613   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:29.279633   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:29.279672   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:29.292840   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:29.292912   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:29.306987   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:29.319896   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:29.319970   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:29.333974   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.343993   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:29.344051   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.354691   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:29.364354   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:29.364422   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:29.374674   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:29.452845   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:06:29.452961   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:29.618263   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:29.618446   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:29.618582   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:06:29.813387   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:29.815501   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:29.815610   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:29.815697   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:29.815799   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:29.815879   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:29.815971   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:29.816039   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:29.816125   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:29.816206   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:29.816307   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:29.816404   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:29.816454   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:29.816531   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:29.944505   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:30.106467   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:30.226356   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:30.322169   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:30.342382   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:30.343666   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:30.343736   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:30.507000   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:27.066923   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:29.068434   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:26.453659   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:26.453693   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:26.900485   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:26.900518   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:26.925431   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:26.925461   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:26.986104   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:26.986140   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:27.037557   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:27.037600   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:27.084362   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:27.084397   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:27.138680   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:27.138713   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:27.191283   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:27.191314   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:29.761781   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:06:29.769020   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:06:29.770210   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:29.770232   72390 api_server.go:131] duration metric: took 3.969490314s to wait for apiserver health ...
	I1014 15:06:29.770242   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:29.770268   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:29.770328   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:29.827908   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:29.827930   72390 cri.go:89] found id: ""
	I1014 15:06:29.827939   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:29.827994   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.837786   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:29.837864   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:29.877625   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:29.877661   72390 cri.go:89] found id: ""
	I1014 15:06:29.877672   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:29.877738   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.882502   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:29.882578   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:29.923002   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:29.923027   72390 cri.go:89] found id: ""
	I1014 15:06:29.923037   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:29.923094   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.927559   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:29.927621   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:29.966098   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:29.966124   72390 cri.go:89] found id: ""
	I1014 15:06:29.966133   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:29.966189   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.972287   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:29.972371   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:30.024389   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.024414   72390 cri.go:89] found id: ""
	I1014 15:06:30.024423   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:30.024481   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.029914   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:30.029976   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:30.085703   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.085727   72390 cri.go:89] found id: ""
	I1014 15:06:30.085737   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:30.085806   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.097004   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:30.097098   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:30.147464   72390 cri.go:89] found id: ""
	I1014 15:06:30.147494   72390 logs.go:282] 0 containers: []
	W1014 15:06:30.147505   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:30.147512   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:30.147573   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:30.195003   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.195030   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:30.195036   72390 cri.go:89] found id: ""
	I1014 15:06:30.195045   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:30.195099   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.199436   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.204079   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:30.204105   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:30.221021   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:30.221049   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:30.280979   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:30.281013   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:30.339261   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:30.339291   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.390034   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:30.390081   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.461221   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:30.461262   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.504100   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:30.504134   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:30.870561   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:30.870629   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:30.942952   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:30.942998   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:30.995435   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:30.995484   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:31.038804   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:31.038839   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:31.080187   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:31.080218   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:31.122248   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:31.122295   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:30.509157   72639 out.go:235]   - Booting up control plane ...
	I1014 15:06:30.509293   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:30.518440   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:30.520572   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:30.522337   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:30.524996   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:06:33.742510   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:06:33.742539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.742546   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.742552   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.742557   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.742562   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.742566   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.742576   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.742582   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.742615   72390 system_pods.go:74] duration metric: took 3.972347536s to wait for pod list to return data ...
	I1014 15:06:33.742628   72390 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:33.744532   72390 default_sa.go:45] found service account: "default"
	I1014 15:06:33.744551   72390 default_sa.go:55] duration metric: took 1.918153ms for default service account to be created ...
	I1014 15:06:33.744558   72390 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:33.750292   72390 system_pods.go:86] 8 kube-system pods found
	I1014 15:06:33.750315   72390 system_pods.go:89] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.750320   72390 system_pods.go:89] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.750324   72390 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.750329   72390 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.750332   72390 system_pods.go:89] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.750335   72390 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.750341   72390 system_pods.go:89] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.750346   72390 system_pods.go:89] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.750352   72390 system_pods.go:126] duration metric: took 5.790549ms to wait for k8s-apps to be running ...
	I1014 15:06:33.750358   72390 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:33.750398   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:33.770342   72390 system_svc.go:56] duration metric: took 19.978034ms WaitForService to wait for kubelet
	I1014 15:06:33.770370   72390 kubeadm.go:582] duration metric: took 4m26.610427104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:33.770392   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:33.774149   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:33.774176   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:33.774190   72390 node_conditions.go:105] duration metric: took 3.792746ms to run NodePressure ...
	I1014 15:06:33.774203   72390 start.go:241] waiting for startup goroutines ...
	I1014 15:06:33.774217   72390 start.go:246] waiting for cluster config update ...
	I1014 15:06:33.774232   72390 start.go:255] writing updated cluster config ...
	I1014 15:06:33.774560   72390 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:33.823879   72390 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:33.825962   72390 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-201291" cluster and "default" namespace by default
	I1014 15:06:33.976430   72173 kubeadm.go:310] [api-check] The API server is healthy after 5.00773575s
	I1014 15:06:33.990496   72173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:06:34.010821   72173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:06:34.051244   72173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:06:34.051513   72173 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-989166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:06:34.066447   72173 kubeadm.go:310] [bootstrap-token] Using token: 46olqw.t0lfd7bmyz0olhbh
	I1014 15:06:34.067925   72173 out.go:235]   - Configuring RBAC rules ...
	I1014 15:06:34.068073   72173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:06:34.077775   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:06:34.097676   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:06:34.103212   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:06:34.112640   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:06:34.119886   72173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:06:34.382372   72173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:06:34.825514   72173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:06:35.383856   72173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:06:35.383877   72173 kubeadm.go:310] 
	I1014 15:06:35.383939   72173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:06:35.383976   72173 kubeadm.go:310] 
	I1014 15:06:35.384094   72173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:06:35.384103   72173 kubeadm.go:310] 
	I1014 15:06:35.384136   72173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:06:35.384223   72173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:06:35.384286   72173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:06:35.384311   72173 kubeadm.go:310] 
	I1014 15:06:35.384414   72173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:06:35.384430   72173 kubeadm.go:310] 
	I1014 15:06:35.384499   72173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:06:35.384512   72173 kubeadm.go:310] 
	I1014 15:06:35.384597   72173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:06:35.384685   72173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:06:35.384744   72173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:06:35.384750   72173 kubeadm.go:310] 
	I1014 15:06:35.384821   72173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:06:35.384928   72173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:06:35.384940   72173 kubeadm.go:310] 
	I1014 15:06:35.385047   72173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385192   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:06:35.385224   72173 kubeadm.go:310] 	--control-plane 
	I1014 15:06:35.385231   72173 kubeadm.go:310] 
	I1014 15:06:35.385322   72173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:06:35.385334   72173 kubeadm.go:310] 
	I1014 15:06:35.385449   72173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385588   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:06:35.386604   72173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:06:35.386674   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:06:35.386689   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:06:35.388617   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:06:31.069009   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:33.565864   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:35.390017   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:06:35.402242   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:06:35.428958   72173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:06:35.429016   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:35.429080   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-989166 minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=embed-certs-989166 minikube.k8s.io/primary=true
	I1014 15:06:35.475775   72173 ops.go:34] apiserver oom_adj: -16
	I1014 15:06:35.645234   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.145613   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.646197   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.145401   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.645956   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.145978   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.645292   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.145444   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.646019   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.869659   72173 kubeadm.go:1113] duration metric: took 4.440701402s to wait for elevateKubeSystemPrivileges
	I1014 15:06:39.869695   72173 kubeadm.go:394] duration metric: took 5m1.76989803s to StartCluster
	I1014 15:06:39.869713   72173 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.869797   72173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:06:39.872564   72173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.872947   72173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:06:39.873165   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:06:39.873085   72173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:06:39.873246   72173 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-989166"
	I1014 15:06:39.873256   72173 addons.go:69] Setting metrics-server=true in profile "embed-certs-989166"
	I1014 15:06:39.873273   72173 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-989166"
	I1014 15:06:39.873272   72173 addons.go:69] Setting default-storageclass=true in profile "embed-certs-989166"
	I1014 15:06:39.873319   72173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-989166"
	W1014 15:06:39.873282   72173 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:06:39.873417   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873282   72173 addons.go:234] Setting addon metrics-server=true in "embed-certs-989166"
	W1014 15:06:39.873476   72173 addons.go:243] addon metrics-server should already be in state true
	I1014 15:06:39.873504   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873875   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873888   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873920   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873947   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873986   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.874050   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.874921   72173 out.go:177] * Verifying Kubernetes components...
	I1014 15:06:39.876972   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1014 15:06:39.893367   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I1014 15:06:39.893905   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.893915   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894023   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894471   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894493   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894651   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894677   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894713   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894731   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894942   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895073   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895563   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.895593   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.895778   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895970   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.896249   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.896293   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.899661   72173 addons.go:234] Setting addon default-storageclass=true in "embed-certs-989166"
	W1014 15:06:39.899685   72173 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:06:39.899714   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.900088   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.900131   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.912591   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1014 15:06:39.913089   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.913630   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.913652   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.914099   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.914287   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.914839   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1014 15:06:39.915288   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.915783   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.915802   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.916147   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.916171   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.916382   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.917766   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.917796   72173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:06:39.919192   72173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:06:35.567508   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:38.065792   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:40.066618   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:39.919297   72173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:39.919320   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:06:39.919339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.920468   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:06:39.920489   72173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:06:39.920507   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.921603   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1014 15:06:39.921970   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.922502   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.922525   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.922994   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.923333   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923585   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.923627   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.923826   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.923846   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923876   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924028   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.924270   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.924291   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.924310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924397   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.924674   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924840   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.925027   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.925201   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.945435   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1014 15:06:39.945958   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.946468   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.946497   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.946855   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.947023   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.948734   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.948924   72173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:39.948942   72173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:06:39.948966   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.951019   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.951437   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951570   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.951742   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.951918   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.952058   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:40.129893   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:06:40.215427   72173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224710   72173 node_ready.go:49] node "embed-certs-989166" has status "Ready":"True"
	I1014 15:06:40.224731   72173 node_ready.go:38] duration metric: took 9.266994ms for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224742   72173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:40.230651   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:40.394829   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:40.422573   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:40.430300   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:06:40.430319   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:06:40.503826   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:06:40.503857   72173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:06:40.586087   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.586116   72173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:06:40.726605   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.887453   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887475   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.887809   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.887857   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.887869   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.887886   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887898   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.888127   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.888150   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.888160   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.901694   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.901717   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.902091   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.902103   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.902111   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.352636   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.352670   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.352963   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:41.353017   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353029   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.353036   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.353043   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.353274   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353302   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578200   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578219   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578484   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578529   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578554   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578588   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578827   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578844   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578854   72173 addons.go:475] Verifying addon metrics-server=true in "embed-certs-989166"
	I1014 15:06:41.581312   72173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:06:41.582506   72173 addons.go:510] duration metric: took 1.709432803s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:06:42.237265   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.240605   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:42.067701   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.566134   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:46.738094   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:48.739238   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.238145   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.238167   72173 pod_ready.go:82] duration metric: took 9.007493385s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.238176   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243268   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.243299   72173 pod_ready.go:82] duration metric: took 5.116183ms for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243311   72173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.247979   72173 pod_ready.go:93] pod "etcd-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.248001   72173 pod_ready.go:82] duration metric: took 4.682826ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.248009   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252590   72173 pod_ready.go:93] pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.252615   72173 pod_ready.go:82] duration metric: took 4.599399ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252624   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257541   72173 pod_ready.go:93] pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.257566   72173 pod_ready.go:82] duration metric: took 4.935116ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257575   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:47.064934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.066284   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.635873   72173 pod_ready.go:93] pod "kube-proxy-g572s" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.635895   72173 pod_ready.go:82] duration metric: took 378.313947ms for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.635904   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035141   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:50.035169   72173 pod_ready.go:82] duration metric: took 399.257073ms for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035179   72173 pod_ready.go:39] duration metric: took 9.810424567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:50.035195   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:50.035258   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:50.054964   72173 api_server.go:72] duration metric: took 10.181978114s to wait for apiserver process to appear ...
	I1014 15:06:50.054996   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:50.055020   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:06:50.061606   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:06:50.063380   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:50.063411   72173 api_server.go:131] duration metric: took 8.40661ms to wait for apiserver health ...
	I1014 15:06:50.063421   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:50.239258   72173 system_pods.go:59] 9 kube-system pods found
	I1014 15:06:50.239286   72173 system_pods.go:61] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.239292   72173 system_pods.go:61] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.239295   72173 system_pods.go:61] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.239299   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.239303   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.239305   72173 system_pods.go:61] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.239308   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.239314   72173 system_pods.go:61] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.239317   72173 system_pods.go:61] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.239325   72173 system_pods.go:74] duration metric: took 175.89649ms to wait for pod list to return data ...
	I1014 15:06:50.239334   72173 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:50.435980   72173 default_sa.go:45] found service account: "default"
	I1014 15:06:50.436007   72173 default_sa.go:55] duration metric: took 196.667838ms for default service account to be created ...
	I1014 15:06:50.436017   72173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:50.639185   72173 system_pods.go:86] 9 kube-system pods found
	I1014 15:06:50.639224   72173 system_pods.go:89] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.639234   72173 system_pods.go:89] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.639241   72173 system_pods.go:89] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.639248   72173 system_pods.go:89] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.639254   72173 system_pods.go:89] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.639262   72173 system_pods.go:89] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.639269   72173 system_pods.go:89] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.639283   72173 system_pods.go:89] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.639295   72173 system_pods.go:89] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.639309   72173 system_pods.go:126] duration metric: took 203.286322ms to wait for k8s-apps to be running ...
	I1014 15:06:50.639327   72173 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:50.639388   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:50.655377   72173 system_svc.go:56] duration metric: took 16.0447ms WaitForService to wait for kubelet
	I1014 15:06:50.655402   72173 kubeadm.go:582] duration metric: took 10.782421893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:50.655425   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:50.835507   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:50.835543   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:50.835556   72173 node_conditions.go:105] duration metric: took 180.126755ms to run NodePressure ...
	I1014 15:06:50.835570   72173 start.go:241] waiting for startup goroutines ...
	I1014 15:06:50.835580   72173 start.go:246] waiting for cluster config update ...
	I1014 15:06:50.835594   72173 start.go:255] writing updated cluster config ...
	I1014 15:06:50.835924   72173 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:50.883737   72173 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:50.886200   72173 out.go:177] * Done! kubectl is now configured to use "embed-certs-989166" cluster and "default" namespace by default
	I1014 15:06:51.066344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:53.566466   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:56.066734   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:58.567007   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:01.066112   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:03.068758   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:05.566174   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:07.566274   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:09.566829   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:10.525694   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:07:10.526665   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:10.526908   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:12.066402   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:13.560638   71679 pod_ready.go:82] duration metric: took 4m0.000980901s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	E1014 15:07:13.560669   71679 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:07:13.560693   71679 pod_ready.go:39] duration metric: took 4m13.04495779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:13.560725   71679 kubeadm.go:597] duration metric: took 4m21.006404411s to restartPrimaryControlPlane
	W1014 15:07:13.560791   71679 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:07:13.560823   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:07:15.527128   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:15.527376   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:25.527779   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:25.528060   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:39.775370   71679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.214519412s)
	I1014 15:07:39.775448   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:07:39.790736   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:07:39.800575   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:07:39.810380   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:07:39.810402   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:07:39.810462   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:07:39.819880   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:07:39.819938   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:07:39.830542   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:07:39.840268   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:07:39.840318   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:07:39.849727   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.858513   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:07:39.858651   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.869154   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:07:39.878724   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:07:39.878798   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:07:39.888123   71679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:07:39.942676   71679 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:07:39.942771   71679 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:07:40.060558   71679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:07:40.060698   71679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:07:40.060861   71679 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:07:40.076085   71679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:07:40.078200   71679 out.go:235]   - Generating certificates and keys ...
	I1014 15:07:40.078301   71679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:07:40.078381   71679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:07:40.078505   71679 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:07:40.078620   71679 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:07:40.078717   71679 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:07:40.078794   71679 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:07:40.078887   71679 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:07:40.078973   71679 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:07:40.079069   71679 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:07:40.079161   71679 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:07:40.079234   71679 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:07:40.079315   71679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:07:40.177082   71679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:07:40.264965   71679 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:07:40.415660   71679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:07:40.556759   71679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:07:40.727152   71679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:07:40.727573   71679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:07:40.730409   71679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:07:40.732204   71679 out.go:235]   - Booting up control plane ...
	I1014 15:07:40.732328   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:07:40.732440   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:07:40.732529   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:07:40.751839   71679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:07:40.758034   71679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:07:40.758095   71679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:07:40.895135   71679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:07:40.895254   71679 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:07:41.397066   71679 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.194797ms
	I1014 15:07:41.397209   71679 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:07:46.401247   71679 kubeadm.go:310] [api-check] The API server is healthy after 5.002197966s
	I1014 15:07:46.419134   71679 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:07:46.433128   71679 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:07:46.477079   71679 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:07:46.477289   71679 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:07:46.492703   71679 kubeadm.go:310] [bootstrap-token] Using token: 1vsv04.mf3pqj2ow157sq8h
	I1014 15:07:46.494314   71679 out.go:235]   - Configuring RBAC rules ...
	I1014 15:07:46.494467   71679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:07:46.501090   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:07:46.515987   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:07:46.522417   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:07:46.528612   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:07:46.536975   71679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:07:46.810642   71679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:07:47.240531   71679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:07:47.810279   71679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:07:47.811169   71679 kubeadm.go:310] 
	I1014 15:07:47.811230   71679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:07:47.811238   71679 kubeadm.go:310] 
	I1014 15:07:47.811307   71679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:07:47.811312   71679 kubeadm.go:310] 
	I1014 15:07:47.811335   71679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:07:47.811388   71679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:07:47.811440   71679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:07:47.811447   71679 kubeadm.go:310] 
	I1014 15:07:47.811501   71679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:07:47.811507   71679 kubeadm.go:310] 
	I1014 15:07:47.811546   71679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:07:47.811553   71679 kubeadm.go:310] 
	I1014 15:07:47.811600   71679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:07:47.811667   71679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:07:47.811755   71679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:07:47.811771   71679 kubeadm.go:310] 
	I1014 15:07:47.811844   71679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:07:47.811912   71679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:07:47.811921   71679 kubeadm.go:310] 
	I1014 15:07:47.811999   71679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812091   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:07:47.812139   71679 kubeadm.go:310] 	--control-plane 
	I1014 15:07:47.812153   71679 kubeadm.go:310] 
	I1014 15:07:47.812231   71679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:07:47.812238   71679 kubeadm.go:310] 
	I1014 15:07:47.812306   71679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812393   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:07:47.814071   71679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:07:47.814103   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:07:47.814113   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:07:47.816033   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:07:45.528527   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:45.528768   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:47.817325   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:07:47.829639   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:07:47.847797   71679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:07:47.847857   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:47.847929   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-813300 minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=no-preload-813300 minikube.k8s.io/primary=true
	I1014 15:07:48.039959   71679 ops.go:34] apiserver oom_adj: -16
	I1014 15:07:48.040095   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:48.540295   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.040911   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.540233   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.040146   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.540494   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.041033   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.540516   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.040935   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.146854   71679 kubeadm.go:1113] duration metric: took 4.299055033s to wait for elevateKubeSystemPrivileges
	I1014 15:07:52.146890   71679 kubeadm.go:394] duration metric: took 4m59.642546726s to StartCluster
	I1014 15:07:52.146906   71679 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.146987   71679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:07:52.148782   71679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.149067   71679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:07:52.149168   71679 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:07:52.149303   71679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-813300"
	I1014 15:07:52.149333   71679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-813300"
	I1014 15:07:52.149342   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1014 15:07:52.149355   71679 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:07:52.149378   71679 addons.go:69] Setting default-storageclass=true in profile "no-preload-813300"
	I1014 15:07:52.149390   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149412   71679 addons.go:69] Setting metrics-server=true in profile "no-preload-813300"
	I1014 15:07:52.149447   71679 addons.go:234] Setting addon metrics-server=true in "no-preload-813300"
	W1014 15:07:52.149461   71679 addons.go:243] addon metrics-server should already be in state true
	I1014 15:07:52.149494   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149421   71679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-813300"
	I1014 15:07:52.149748   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149789   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149861   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149890   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149905   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149928   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.150482   71679 out.go:177] * Verifying Kubernetes components...
	I1014 15:07:52.152252   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:07:52.167205   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1014 15:07:52.170723   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I1014 15:07:52.170742   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.170728   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1014 15:07:52.171111   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171321   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171386   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171678   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171702   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171717   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.171900   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171916   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.172164   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172243   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172279   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172325   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.172386   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.172868   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172916   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.175482   71679 addons.go:234] Setting addon default-storageclass=true in "no-preload-813300"
	W1014 15:07:52.175502   71679 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:07:52.175529   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.175763   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.175792   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.190835   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1014 15:07:52.191422   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.191767   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I1014 15:07:52.191901   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1014 15:07:52.192010   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.192027   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192317   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192436   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.192481   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192988   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193010   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192992   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193060   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.193474   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193524   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.193530   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193563   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.193729   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.193770   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.195702   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.195770   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.197642   71679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:07:52.197652   71679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:07:52.198957   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:07:52.198978   71679 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:07:52.198998   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.199075   71679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.199096   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:07:52.199111   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.202637   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203064   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203088   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203245   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.203515   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.203519   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203663   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.203812   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.203878   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.204187   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.204377   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.204535   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.204683   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.231332   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I1014 15:07:52.231813   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.232320   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.232344   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.232645   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.232836   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.234309   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.234570   71679 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.234585   71679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:07:52.234622   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.237749   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238364   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.238393   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238562   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.238744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.238903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.239031   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.375830   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:07:52.401606   71679 node_ready.go:35] waiting up to 6m0s for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431363   71679 node_ready.go:49] node "no-preload-813300" has status "Ready":"True"
	I1014 15:07:52.431393   71679 node_ready.go:38] duration metric: took 29.758277ms for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431405   71679 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:52.446747   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:52.501642   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:07:52.501664   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:07:52.509733   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.515833   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.536485   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:07:52.536508   71679 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:07:52.622269   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.622299   71679 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:07:52.702873   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.909827   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.909865   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910194   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910209   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.910235   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.910249   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910510   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910525   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918161   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.918182   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.918473   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.918493   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918480   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:53.707659   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.191781585s)
	I1014 15:07:53.707706   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.707719   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708011   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708035   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:53.708052   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.708062   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708330   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708346   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.060665   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.357747934s)
	I1014 15:07:54.060752   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.060770   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.061069   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.061153   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.061164   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.061173   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.061184   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.062712   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.062787   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.062797   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.062811   71679 addons.go:475] Verifying addon metrics-server=true in "no-preload-813300"
	I1014 15:07:54.064762   71679 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:07:54.066623   71679 addons.go:510] duration metric: took 1.917465271s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:07:54.454216   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:56.455649   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:56.455674   71679 pod_ready.go:82] duration metric: took 4.00889709s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:56.455689   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:58.461687   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:59.962360   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.962382   71679 pod_ready.go:82] duration metric: took 3.506686516s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.962391   71679 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969241   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.969261   71679 pod_ready.go:82] duration metric: took 6.864356ms for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969270   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974810   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.974828   71679 pod_ready.go:82] duration metric: took 5.552122ms for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974837   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979555   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.979580   71679 pod_ready.go:82] duration metric: took 4.735265ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979592   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985111   71679 pod_ready.go:93] pod "kube-proxy-54rrd" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.985138   71679 pod_ready.go:82] duration metric: took 5.538126ms for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985150   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359524   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:08:00.359548   71679 pod_ready.go:82] duration metric: took 374.389838ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359558   71679 pod_ready.go:39] duration metric: took 7.928141116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:08:00.359575   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:08:00.359626   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:08:00.376115   71679 api_server.go:72] duration metric: took 8.22700683s to wait for apiserver process to appear ...
	I1014 15:08:00.376144   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:08:00.376169   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:08:00.381225   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:08:00.382348   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:08:00.382377   71679 api_server.go:131] duration metric: took 6.225832ms to wait for apiserver health ...
	I1014 15:08:00.382386   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:08:00.563350   71679 system_pods.go:59] 9 kube-system pods found
	I1014 15:08:00.563382   71679 system_pods.go:61] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.563386   71679 system_pods.go:61] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.563390   71679 system_pods.go:61] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.563394   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.563399   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.563402   71679 system_pods.go:61] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.563405   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.563412   71679 system_pods.go:61] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.563416   71679 system_pods.go:61] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.563424   71679 system_pods.go:74] duration metric: took 181.032852ms to wait for pod list to return data ...
	I1014 15:08:00.563436   71679 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:08:00.760054   71679 default_sa.go:45] found service account: "default"
	I1014 15:08:00.760084   71679 default_sa.go:55] duration metric: took 196.637678ms for default service account to be created ...
	I1014 15:08:00.760095   71679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:08:00.962545   71679 system_pods.go:86] 9 kube-system pods found
	I1014 15:08:00.962577   71679 system_pods.go:89] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.962583   71679 system_pods.go:89] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.962587   71679 system_pods.go:89] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.962591   71679 system_pods.go:89] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.962605   71679 system_pods.go:89] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.962609   71679 system_pods.go:89] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.962613   71679 system_pods.go:89] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.962619   71679 system_pods.go:89] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.962623   71679 system_pods.go:89] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.962633   71679 system_pods.go:126] duration metric: took 202.532202ms to wait for k8s-apps to be running ...
	I1014 15:08:00.962640   71679 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:08:00.962682   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:00.980272   71679 system_svc.go:56] duration metric: took 17.624381ms WaitForService to wait for kubelet
	I1014 15:08:00.980310   71679 kubeadm.go:582] duration metric: took 8.831207019s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:08:00.980333   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:08:01.160914   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:08:01.160947   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:08:01.160961   71679 node_conditions.go:105] duration metric: took 180.622279ms to run NodePressure ...
	I1014 15:08:01.160976   71679 start.go:241] waiting for startup goroutines ...
	I1014 15:08:01.160985   71679 start.go:246] waiting for cluster config update ...
	I1014 15:08:01.161000   71679 start.go:255] writing updated cluster config ...
	I1014 15:08:01.161357   71679 ssh_runner.go:195] Run: rm -f paused
	I1014 15:08:01.212486   71679 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:08:01.215083   71679 out.go:177] * Done! kubectl is now configured to use "no-preload-813300" cluster and "default" namespace by default
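[Editor's illustration, not part of the captured log] The run above ends once the apiserver healthz probe logged at api_server.go:253/279 returns 200 "ok". The sketch below is a minimal, hypothetical Go illustration of that kind of readiness poll; it is not minikube's actual implementation, and the endpoint is simply the address reported in the log above (the test cluster uses a self-signed certificate, hence the skipped TLS verification).

	// healthzprobe.go — hypothetical sketch of an apiserver /healthz readiness poll.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint taken from the log output above; adjust for your own cluster.
		url := "https://192.168.61.13:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// Illustration only: the test apiserver cert is self-signed, so verification is skipped.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy before the deadline")
	}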
	I1014 15:08:25.530669   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:08:25.530970   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530998   72639 kubeadm.go:310] 
	I1014 15:08:25.531059   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:08:25.531114   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:08:25.531125   72639 kubeadm.go:310] 
	I1014 15:08:25.531177   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:08:25.531238   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:08:25.531381   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:08:25.531392   72639 kubeadm.go:310] 
	I1014 15:08:25.531527   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:08:25.531587   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:08:25.531633   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:08:25.531643   72639 kubeadm.go:310] 
	I1014 15:08:25.531766   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:08:25.531872   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:08:25.531891   72639 kubeadm.go:310] 
	I1014 15:08:25.532038   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:08:25.532174   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:08:25.532281   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:08:25.532377   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:08:25.532418   72639 kubeadm.go:310] 
	I1014 15:08:25.532543   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:08:25.532640   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:08:25.532742   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 15:08:25.532833   72639 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 15:08:25.532870   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:08:31.003635   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.470741012s)
	I1014 15:08:31.003724   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:31.018666   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:08:31.029707   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:08:31.029729   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:08:31.029776   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:08:31.039554   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:08:31.039625   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:08:31.049748   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:08:31.059618   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:08:31.059682   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:08:31.069369   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.078321   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:08:31.078385   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.088006   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:08:31.096681   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:08:31.096742   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
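[Editor's illustration, not part of the captured log] The grep/rm sequence above (kubeadm.go:163) keeps a kubeconfig under /etc/kubernetes only if it references the expected control-plane endpoint and removes it otherwise, so the retried kubeadm init starts from a clean state. The following is a hypothetical Go sketch of that check, not minikube's actual code; the endpoint string is the one visible in the log.

	// cleanupconfigs.go — hypothetical sketch of the stale-kubeconfig cleanup step.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		configs := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}

		for _, path := range configs {
			data, err := os.ReadFile(path)
			if err == nil && strings.Contains(string(data), endpoint) {
				fmt.Printf("keeping %s (references %s)\n", path, endpoint)
				continue
			}
			// Missing file or unexpected endpoint: treat the config as stale and remove it.
			if rmErr := os.Remove(path); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Printf("could not remove %s: %v\n", path, rmErr)
				continue
			}
			fmt.Printf("removed stale config %s\n", path)
		}
	}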
	I1014 15:08:31.106269   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:08:31.182768   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:08:31.182833   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:08:31.341660   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:08:31.341833   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:08:31.342008   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:08:31.538731   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:08:31.540933   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:08:31.541037   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:08:31.541124   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:08:31.541270   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:08:31.541386   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:08:31.541486   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:08:31.541559   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:08:31.541663   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:08:31.541750   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:08:31.542000   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:08:31.542534   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:08:31.542627   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:08:31.542711   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:08:31.847005   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:08:32.049586   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:08:32.355652   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:08:32.511031   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:08:32.526310   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:08:32.526755   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:08:32.526841   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:08:32.665898   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:08:32.667688   72639 out.go:235]   - Booting up control plane ...
	I1014 15:08:32.667806   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:08:32.681232   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:08:32.682929   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:08:32.683704   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:08:32.685936   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:09:12.687998   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:09:12.688248   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:12.688517   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:17.689026   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:17.689213   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:27.689821   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:27.690119   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:47.690936   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:47.691185   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691438   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:10:27.691721   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691744   72639 kubeadm.go:310] 
	I1014 15:10:27.691779   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:10:27.691847   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:10:27.691867   72639 kubeadm.go:310] 
	I1014 15:10:27.691907   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:10:27.691972   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:10:27.692124   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:10:27.692136   72639 kubeadm.go:310] 
	I1014 15:10:27.692253   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:10:27.692311   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:10:27.692352   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:10:27.692363   72639 kubeadm.go:310] 
	I1014 15:10:27.692497   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:10:27.692617   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:10:27.692633   72639 kubeadm.go:310] 
	I1014 15:10:27.692787   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:10:27.692915   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:10:27.693051   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:10:27.693146   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:10:27.693158   72639 kubeadm.go:310] 
	I1014 15:10:27.693497   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:10:27.693627   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:10:27.693710   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 15:10:27.693770   72639 kubeadm.go:394] duration metric: took 8m7.905137486s to StartCluster
	I1014 15:10:27.693808   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:10:27.693863   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:10:27.735373   72639 cri.go:89] found id: ""
	I1014 15:10:27.735410   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.735419   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:10:27.735425   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:10:27.735484   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:10:27.775691   72639 cri.go:89] found id: ""
	I1014 15:10:27.775713   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.775721   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:10:27.775727   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:10:27.775778   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:10:27.811621   72639 cri.go:89] found id: ""
	I1014 15:10:27.811645   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.811653   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:10:27.811658   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:10:27.811718   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:10:27.850894   72639 cri.go:89] found id: ""
	I1014 15:10:27.850917   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.850925   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:10:27.850931   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:10:27.850979   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:10:27.891559   72639 cri.go:89] found id: ""
	I1014 15:10:27.891596   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.891608   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:10:27.891616   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:10:27.891671   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:10:27.929896   72639 cri.go:89] found id: ""
	I1014 15:10:27.929929   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.929942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:10:27.930002   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:10:27.930096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:10:27.964801   72639 cri.go:89] found id: ""
	I1014 15:10:27.964828   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.964839   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:10:27.964845   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:10:27.964905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:10:28.011737   72639 cri.go:89] found id: ""
	I1014 15:10:28.011761   72639 logs.go:282] 0 containers: []
	W1014 15:10:28.011769   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:10:28.011777   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:10:28.011788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:10:28.088053   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:10:28.088082   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:10:28.088098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:10:28.214495   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:10:28.214531   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:10:28.254766   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:10:28.254796   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:10:28.304942   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:10:28.304977   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1014 15:10:28.319674   72639 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 15:10:28.319729   72639 out.go:270] * 
	W1014 15:10:28.319783   72639 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.319802   72639 out.go:270] * 
	W1014 15:10:28.320716   72639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 15:10:28.324551   72639 out.go:201] 
	W1014 15:10:28.325905   72639 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.325940   72639 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 15:10:28.325985   72639 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 15:10:28.327473   72639 out.go:201] 
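
	The troubleshooting steps named in the kubeadm output and the minikube suggestion above can be run directly on the affected node. The sketch below only restates those commands in order; it is illustrative, not part of the test output. The CRI-O socket path is the one printed in the log, and <profile> is a placeholder for the failing minikube profile (not recorded here).

		# Inspect the kubelet, which never became healthy on port 10248
		systemctl status kubelet
		journalctl -xeu kubelet
		# Probe the health endpoint polled by the [kubelet-check] loop
		curl -sSL http://localhost:10248/healthz
		# List control-plane containers via CRI-O and read the logs of a failing one
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
		# Retry with the cgroup-driver hint from the suggestion above; <profile> is a placeholder
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd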
	
	
	==> CRI-O <==
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.889677378Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:85998e1e685102a0fb47b3550ab11a656d2555c1eb88e9739b05f6db0820f72e,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-jl6pp,Uid:c244e53d-c492-426a-be7f-d405f2defd17,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918401762031674,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-jl6pp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c244e53d-c492-426a-be7f-d405f2defd17,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:06:41.445122505Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ad6caa59-bc75-4e8f-8052-86d963b92fe3,N
amespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918401640589801,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"vol
umes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-14T15:06:41.332282577Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6bmwg,Uid:7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918400108770375,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:06:39.791378347Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-l95hj,Uid:6563de05-ef49-4fa9
-bf0b-a826fbc8bb14,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918400069435447,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6563de05-ef49-4fa9-bf0b-a826fbc8bb14,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:06:39.763693680Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&PodSandboxMetadata{Name:kube-proxy-g572s,Uid:5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918399737815209,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:06:39.424973921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-989166,Uid:d9ff6f2bfff2c52f6a606532fcbf27dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918388878090908,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d9ff6f2bfff2c52f6a606532fcbf27dc,kubernetes.io/config.seen: 2024-10-14T15:06:28.438000363Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&PodSandboxM
etadata{Name:kube-scheduler-embed-certs-989166,Uid:9ac42a1687ced5a6942f248383d04a7c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918388876298700,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9ac42a1687ced5a6942f248383d04a7c,kubernetes.io/config.seen: 2024-10-14T15:06:28.438001746Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-989166,Uid:151746586ecdf42f597979a13a5b43e9,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728918388871454794,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.41:8443,kubernetes.io/config.hash: 151746586ecdf42f597979a13a5b43e9,kubernetes.io/config.seen: 2024-10-14T15:06:28.437998400Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-989166,Uid:bad6ed702edb980f9ab495bd0c87ec1e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918388869258419,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39
.41:2379,kubernetes.io/config.hash: bad6ed702edb980f9ab495bd0c87ec1e,kubernetes.io/config.seen: 2024-10-14T15:06:28.437993667Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-989166,Uid:151746586ecdf42f597979a13a5b43e9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728918100424246758,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.41:8443,kubernetes.io/config.hash: 151746586ecdf42f597979a13a5b43e9,kubernetes.io/config.seen: 2024-10-14T15:01:39.904036590Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collect
or/interceptors.go:74" id=69944f3d-8945-437d-9cae-259dc1c5a474 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.890527359Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ef8a313f-2511-403c-9cd9-f5764393d40a name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.890596600Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ef8a313f-2511-403c-9cd9-f5764393d40a name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.890848594Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ef8a313f-2511-403c-9cd9-f5764393d40a name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.915723836Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06cd93be-c09c-40a0-b555-6051c481918d name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.915787785Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06cd93be-c09c-40a0-b555-6051c481918d name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.917041358Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1339adb5-7529-4dd4-8e95-f285728d7f85 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.917431745Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918952917410265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1339adb5-7529-4dd4-8e95-f285728d7f85 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.918037432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5d45fdba-9708-40bc-9cb5-47ee74a00e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.918102506Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5d45fdba-9708-40bc-9cb5-47ee74a00e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.918317186Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5d45fdba-9708-40bc-9cb5-47ee74a00e89 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.961160922Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1ba209b9-23c1-4684-bd39-2a20453f7ff8 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.961253340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1ba209b9-23c1-4684-bd39-2a20453f7ff8 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.962678981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d838941-2f7a-4d5d-95f7-7e093556be57 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.963159688Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918952963125236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d838941-2f7a-4d5d-95f7-7e093556be57 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.963827432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9e2ddcbc-3ace-4c6f-8ed4-e5e14b41d483 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.964009256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9e2ddcbc-3ace-4c6f-8ed4-e5e14b41d483 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:52 embed-certs-989166 crio[711]: time="2024-10-14 15:15:52.964231724Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9e2ddcbc-3ace-4c6f-8ed4-e5e14b41d483 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:53 embed-certs-989166 crio[711]: time="2024-10-14 15:15:53.003962757Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3de1f5d0-b3a8-41ed-9055-ed73d2b1afa1 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:53 embed-certs-989166 crio[711]: time="2024-10-14 15:15:53.004048581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3de1f5d0-b3a8-41ed-9055-ed73d2b1afa1 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:15:53 embed-certs-989166 crio[711]: time="2024-10-14 15:15:53.005839684Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7a3a3da9-394b-49f8-97bf-963d00d7d8f5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:53 embed-certs-989166 crio[711]: time="2024-10-14 15:15:53.006303407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918953006280223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7a3a3da9-394b-49f8-97bf-963d00d7d8f5 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:15:53 embed-certs-989166 crio[711]: time="2024-10-14 15:15:53.006892880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91b47e71-21f0-4302-860f-63b2614c4efd name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:53 embed-certs-989166 crio[711]: time="2024-10-14 15:15:53.006948153Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91b47e71-21f0-4302-860f-63b2614c4efd name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:15:53 embed-certs-989166 crio[711]: time="2024-10-14 15:15:53.007137887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=91b47e71-21f0-4302-860f-63b2614c4efd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fdcf89c5b9143       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   90a8fa5d83794       storage-provisioner
	881e4e8d79988       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   b2a087c3065ef       coredns-7c65d6cfc9-l95hj
	f1596c06b1cc7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   8ab2c3a253921       coredns-7c65d6cfc9-6bmwg
	9ec492cd0941d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   b6c06b464ea07       kube-proxy-g572s
	41c5829ee86da       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   dc62fbba3d604       kube-scheduler-embed-certs-989166
	8ee6b6b51cbe1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   f72862ad45faa       etcd-embed-certs-989166
	ea51e3357f925       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   3ca4cc5b4ea30       kube-controller-manager-embed-certs-989166
	4d04e68f07c0c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   41897ecafacb5       kube-apiserver-embed-certs-989166
	0b8ad44dccb25       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   65a1ca161721b       kube-apiserver-embed-certs-989166
	
	
	==> coredns [881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-989166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-989166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=embed-certs-989166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-989166
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:15:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:11:52 +0000   Mon, 14 Oct 2024 15:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:11:52 +0000   Mon, 14 Oct 2024 15:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:11:52 +0000   Mon, 14 Oct 2024 15:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:11:52 +0000   Mon, 14 Oct 2024 15:06:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    embed-certs-989166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bb0f0ffa8f04dc1b7be39d4d45995f7
	  System UUID:                9bb0f0ff-a8f0-4dc1-b7be-39d4d45995f7
	  Boot ID:                    71741bef-62d9-4a2a-8633-17b06b62bf73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6bmwg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 coredns-7c65d6cfc9-l95hj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m14s
	  kube-system                 etcd-embed-certs-989166                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-989166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-989166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-g572s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                 kube-scheduler-embed-certs-989166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-6867b74b74-jl6pp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m12s  kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-989166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-989166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-989166 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m15s  node-controller  Node embed-certs-989166 event: Registered Node embed-certs-989166 in Controller
	
	
	==> dmesg <==
	[  +0.051045] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039978] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.850488] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.479916] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586706] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.250268] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.059214] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056814] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.169081] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.137500] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.294875] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.134310] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +2.222006] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +0.058470] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.574120] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.945173] kauditd_printk_skb: 87 callbacks suppressed
	[Oct14 15:06] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.695371] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +4.628169] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.949286] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +5.507569] systemd-fstab-generator[3009]: Ignoring "noauto" option for root device
	[  +0.061353] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.992893] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702] <==
	{"level":"info","ts":"2024-10-14T15:06:29.643619Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T15:06:29.643659Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T15:06:29.643668Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T15:06:29.643908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 switched to configuration voters=(10393760029520308295)"}
	{"level":"info","ts":"2024-10-14T15:06:29.644012Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","added-peer-id":"903e0dada8362847","added-peer-peer-urls":["https://192.168.39.41:2380"]}
	{"level":"info","ts":"2024-10-14T15:06:30.279444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-14T15:06:30.279576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-14T15:06:30.279677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgPreVoteResp from 903e0dada8362847 at term 1"}
	{"level":"info","ts":"2024-10-14T15:06:30.279732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became candidate at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.279757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgVoteResp from 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.279822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became leader at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.279847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 903e0dada8362847 elected leader 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.283367Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:06:30.286168Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"903e0dada8362847","local-member-attributes":"{Name:embed-certs-989166 ClientURLs:[https://192.168.39.41:2379]}","request-path":"/0/members/903e0dada8362847/attributes","cluster-id":"b5cacf25c2f2940e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T15:06:30.286936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:06:30.287465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:06:30.288690Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:06:30.288841Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T15:06:30.290943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T15:06:30.291418Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:06:30.294260Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T15:06:30.297191Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.41:2379"}
	{"level":"info","ts":"2024-10-14T15:06:30.297594Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:06:30.297697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:06:30.297748Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 15:15:53 up 14 min,  0 users,  load average: 0.18, 0.15, 0.13
	Linux embed-certs-989166 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80] <==
	W1014 15:06:21.452329       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.452445       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.459210       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.628230       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.640036       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.664313       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.669747       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.729339       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.763175       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.777747       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.839757       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.893159       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.916629       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.992741       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.090553       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.123522       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.168400       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.443446       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.451096       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.558556       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:23.674507       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:24.675488       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:25.672322       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:25.727618       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:26.014260       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769] <==
	W1014 15:11:32.903336       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:11:32.903417       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:11:32.904386       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:11:32.904457       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:12:32.905025       1 handler_proxy.go:99] no RequestInfo found in the context
	W1014 15:12:32.905051       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:12:32.905296       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1014 15:12:32.905437       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:12:32.906554       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:12:32.906659       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:14:32.907713       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:14:32.907805       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:14:32.907752       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:14:32.907948       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:14:32.909270       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:14:32.909327       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c] <==
	E1014 15:10:38.862930       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:10:39.297516       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:11:08.869586       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:11:09.305953       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:11:38.876496       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:11:39.313519       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:11:52.069323       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-989166"
	E1014 15:12:08.882728       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:12:09.320777       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:12:38.890209       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:12:39.328556       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:12:46.790697       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="210.638µs"
	I1014 15:13:00.794419       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="95.18µs"
	E1014 15:13:08.897430       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:13:09.336948       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:13:38.904530       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:13:39.344749       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:14:08.911342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:14:09.351795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:14:38.919528       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:14:39.360261       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:15:08.926348       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:15:09.369349       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:15:38.932990       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:15:39.376682       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:06:40.427552       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:06:40.445814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	E1014 15:06:40.447326       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:06:40.536300       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:06:40.536366       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:06:40.536397       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:06:40.584301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:06:40.584560       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:06:40.584588       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:06:40.588024       1 config.go:199] "Starting service config controller"
	I1014 15:06:40.588118       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:06:40.588169       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:06:40.588174       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:06:40.589004       1 config.go:328] "Starting node config controller"
	I1014 15:06:40.589031       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:06:40.688587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 15:06:40.688671       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:06:40.690750       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381] <==
	W1014 15:06:31.922282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 15:06:31.922320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:31.922288       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 15:06:31.922506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:32.777640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 15:06:32.777753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:32.876928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 15:06:32.877245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:32.952341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 15:06:32.952936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.064691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 15:06:33.065104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.101757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 15:06:33.101977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.101774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 15:06:33.102066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.104018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 15:06:33.104079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.124661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 15:06:33.124938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.129162       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 15:06:33.129264       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 15:06:33.163682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 15:06:33.163794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 15:06:36.113809       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:14:37 embed-certs-989166 kubelet[2868]: E1014 15:14:37.773731    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:14:44 embed-certs-989166 kubelet[2868]: E1014 15:14:44.924765    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918884924087764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:44 embed-certs-989166 kubelet[2868]: E1014 15:14:44.925123    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918884924087764,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:52 embed-certs-989166 kubelet[2868]: E1014 15:14:52.773455    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:14:54 embed-certs-989166 kubelet[2868]: E1014 15:14:54.926657    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918894926389563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:14:54 embed-certs-989166 kubelet[2868]: E1014 15:14:54.926684    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918894926389563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:04 embed-certs-989166 kubelet[2868]: E1014 15:15:04.929797    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918904929422960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:04 embed-certs-989166 kubelet[2868]: E1014 15:15:04.929917    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918904929422960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:05 embed-certs-989166 kubelet[2868]: E1014 15:15:05.773446    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:15:14 embed-certs-989166 kubelet[2868]: E1014 15:15:14.932641    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918914932060347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:14 embed-certs-989166 kubelet[2868]: E1014 15:15:14.932685    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918914932060347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:16 embed-certs-989166 kubelet[2868]: E1014 15:15:16.772804    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:15:24 embed-certs-989166 kubelet[2868]: E1014 15:15:24.934604    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918924934329419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:24 embed-certs-989166 kubelet[2868]: E1014 15:15:24.934955    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918924934329419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:30 embed-certs-989166 kubelet[2868]: E1014 15:15:30.774630    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:15:34 embed-certs-989166 kubelet[2868]: E1014 15:15:34.817002    2868 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:15:34 embed-certs-989166 kubelet[2868]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:15:34 embed-certs-989166 kubelet[2868]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:15:34 embed-certs-989166 kubelet[2868]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:15:34 embed-certs-989166 kubelet[2868]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:15:34 embed-certs-989166 kubelet[2868]: E1014 15:15:34.937532    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918934936960621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:34 embed-certs-989166 kubelet[2868]: E1014 15:15:34.937553    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918934936960621,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:44 embed-certs-989166 kubelet[2868]: E1014 15:15:44.940989    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918944939983907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:44 embed-certs-989166 kubelet[2868]: E1014 15:15:44.941047    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918944939983907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:45 embed-certs-989166 kubelet[2868]: E1014 15:15:45.773375    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	
	
	==> storage-provisioner [fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f] <==
	I1014 15:06:41.859434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 15:06:41.873373       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 15:06:41.873520       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 15:06:41.888094       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 15:06:41.888275       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-989166_0a3d1888-9541-478e-b17d-819ae5260e2d!
	I1014 15:06:41.889297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da1f4c41-9bb1-4afd-8cbf-fa16c3cfabf6", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-989166_0a3d1888-9541-478e-b17d-819ae5260e2d became leader
	I1014 15:06:41.995474       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-989166_0a3d1888-9541-478e-b17d-819ae5260e2d!
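The storage-provisioner lines above show a normal leader-election handshake: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lock before starting its controller. A hypothetical way to inspect that election record from outside (object name and namespace are taken from the log; the annotation key is the one client-go uses for Endpoints-based locks, and the identity recorded there should match embed-certs-989166_0a3d1888-9541-478e-b17d-819ae5260e2d from the log):

	kubectl --context embed-certs-989166 -n kube-system \
	  get endpoints k8s.io-minikube-hostpath -o yaml \
	  | grep "control-plane.alpha.kubernetes.io/leader"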
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-989166 -n embed-certs-989166
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-989166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jl6pp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-989166 describe pod metrics-server-6867b74b74-jl6pp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-989166 describe pod metrics-server-6867b74b74-jl6pp: exit status 1 (63.496586ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jl6pp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-989166 describe pod metrics-server-6867b74b74-jl6pp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1014 15:08:36.994735   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:09:27.794446   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-813300 -n no-preload-813300
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-10-14 15:17:01.762867538 +0000 UTC m=+5906.044215879
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
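For reference, the condition the test polls for can be reproduced by hand with kubectl wait. This is a sketch only: the label, namespace, and 9-minute timeout are taken from the log above, and using the Ready condition is an assumption about what the helper checks.

	kubectl --context no-preload-813300 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m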
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300
E1014 15:17:01.835383   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-813300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-813300 logs -n 25: (2.134218282s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-517678 sudo cat                              | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo find                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo crio                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-517678                                       | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
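The audit trail above captures the sequence that put the no-preload cluster into its current state. Condensed into a shell sketch (commands and flags copied from the table, run top to bottom; line continuations added only for readability):

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-813300 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	out/minikube-linux-amd64 stop -p no-preload-813300 --alsologtostderr -v=3
	out/minikube-linux-amd64 addons enable dashboard -p no-preload-813300 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	out/minikube-linux-amd64 start -p no-preload-813300 --memory=2200 --alsologtostderr --wait=true \
	  --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.1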
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:58:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:58:18.000027   72639 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:58:18.000165   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000176   72639 out.go:358] Setting ErrFile to fd 2...
	I1014 14:58:18.000189   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000390   72639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:58:18.000911   72639 out.go:352] Setting JSON to false
	I1014 14:58:18.001828   72639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6048,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:58:18.001919   72639 start.go:139] virtualization: kvm guest
	I1014 14:58:18.004056   72639 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:58:18.005382   72639 notify.go:220] Checking for updates...
	I1014 14:58:18.005437   72639 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:58:18.006939   72639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:58:18.008275   72639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:58:18.009565   72639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:58:18.010773   72639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:58:18.011941   72639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:58:18.013472   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:58:18.013833   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.013892   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.028372   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1014 14:58:18.028786   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.029355   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.029375   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.029671   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.029827   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.031644   72639 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:58:18.033229   72639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:58:18.033524   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.033565   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.048210   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1014 14:58:18.048620   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.049080   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.049102   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.049377   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.049550   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.084664   72639 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:58:18.085942   72639 start.go:297] selected driver: kvm2
	I1014 14:58:18.085952   72639 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.086042   72639 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:58:18.086707   72639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.086795   72639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:58:18.101802   72639 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:58:18.102194   72639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:58:18.102224   72639 cni.go:84] Creating CNI manager for ""
	I1014 14:58:18.102263   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:58:18.102315   72639 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.102441   72639 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.105418   72639 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:58:16.182868   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:18.106656   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:58:18.106696   72639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:58:18.106708   72639 cache.go:56] Caching tarball of preloaded images
	I1014 14:58:18.106790   72639 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:58:18.106800   72639 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:58:18.106889   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:58:18.107063   72639 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:58:22.262902   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:25.334877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:31.414867   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:34.486863   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:40.566883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:43.638929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:49.718856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:52.790946   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:58.870883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:01.942844   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:08.022831   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:11.094893   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:17.174897   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:20.246818   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:26.326911   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:29.398852   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:35.478877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:38.550829   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:44.630857   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:47.702856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:53.782842   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:56.854890   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:02.934894   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:06.006879   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:12.086905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:15.158856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:21.238905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:24.310889   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:30.390878   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:33.462909   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:39.542866   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:42.614929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:48.694859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:51.766865   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:57.846913   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:00.918859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:06.998892   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:10.070810   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:13.075950   72173 start.go:364] duration metric: took 3m43.687804446s to acquireMachinesLock for "embed-certs-989166"
	I1014 15:01:13.076005   72173 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:13.076011   72173 fix.go:54] fixHost starting: 
	I1014 15:01:13.076341   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:13.076386   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:13.092168   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I1014 15:01:13.092686   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:13.093180   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:01:13.093204   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:13.093560   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:13.093749   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:13.093889   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:01:13.095639   72173 fix.go:112] recreateIfNeeded on embed-certs-989166: state=Stopped err=<nil>
	I1014 15:01:13.095665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	W1014 15:01:13.095827   72173 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:13.097909   72173 out.go:177] * Restarting existing kvm2 VM for "embed-certs-989166" ...
	I1014 15:01:13.099253   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Start
	I1014 15:01:13.099433   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring networks are active...
	I1014 15:01:13.100328   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network default is active
	I1014 15:01:13.100683   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network mk-embed-certs-989166 is active
	I1014 15:01:13.101062   72173 main.go:141] libmachine: (embed-certs-989166) Getting domain xml...
	I1014 15:01:13.101867   72173 main.go:141] libmachine: (embed-certs-989166) Creating domain...
	I1014 15:01:13.073323   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:13.073356   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073658   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:01:13.073682   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:01:13.075825   71679 machine.go:96] duration metric: took 4m37.425006s to provisionDockerMachine
	I1014 15:01:13.075866   71679 fix.go:56] duration metric: took 4m37.446829923s for fixHost
	I1014 15:01:13.075872   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 4m37.446848059s
	W1014 15:01:13.075889   71679 start.go:714] error starting host: provision: host is not running
	W1014 15:01:13.075983   71679 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1014 15:01:13.075992   71679 start.go:729] Will try again in 5 seconds ...
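The long run of "no route to host" dials above ends with provisioning giving up because the no-preload VM never became reachable on 192.168.61.13:22. Hypothetical host-side checks one could run while this is happening (not part of the test; they assume libvirt's virsh is available on the agent and that the domain name matches the profile name):

	sudo virsh list --all                     # is the no-preload-813300 domain actually running?
	sudo virsh domifaddr no-preload-813300    # did the domain get a lease on its libvirt network?
	nc -vz -w 3 192.168.61.13 22              # is anything listening on the SSH port?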
	I1014 15:01:14.319338   72173 main.go:141] libmachine: (embed-certs-989166) Waiting to get IP...
	I1014 15:01:14.320167   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.320582   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.320651   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.320577   73268 retry.go:31] will retry after 213.073722ms: waiting for machine to come up
	I1014 15:01:14.534913   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.535353   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.535375   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.535306   73268 retry.go:31] will retry after 316.205029ms: waiting for machine to come up
	I1014 15:01:14.852769   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.853201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.853261   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.853201   73268 retry.go:31] will retry after 399.414867ms: waiting for machine to come up
	I1014 15:01:15.253657   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.253955   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.253979   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.253917   73268 retry.go:31] will retry after 537.097034ms: waiting for machine to come up
	I1014 15:01:15.792362   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.792736   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.792763   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.792703   73268 retry.go:31] will retry after 594.582114ms: waiting for machine to come up
	I1014 15:01:16.388419   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:16.388838   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:16.388869   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:16.388793   73268 retry.go:31] will retry after 814.814512ms: waiting for machine to come up
	I1014 15:01:17.204791   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:17.205229   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:17.205255   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:17.205176   73268 retry.go:31] will retry after 971.673961ms: waiting for machine to come up
	I1014 15:01:18.178701   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:18.179100   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:18.179130   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:18.179048   73268 retry.go:31] will retry after 941.576822ms: waiting for machine to come up
	I1014 15:01:19.122097   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:19.122488   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:19.122514   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:19.122453   73268 retry.go:31] will retry after 1.5308999s: waiting for machine to come up
	I1014 15:01:18.077601   71679 start.go:360] acquireMachinesLock for no-preload-813300: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:01:20.655098   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:20.655524   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:20.655550   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:20.655475   73268 retry.go:31] will retry after 1.590510545s: waiting for machine to come up
	I1014 15:01:22.248128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:22.248551   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:22.248572   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:22.248511   73268 retry.go:31] will retry after 1.965898839s: waiting for machine to come up
	I1014 15:01:24.215742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:24.216187   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:24.216240   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:24.216136   73268 retry.go:31] will retry after 3.476459931s: waiting for machine to come up
	I1014 15:01:27.696804   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:27.697201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:27.697254   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:27.697175   73268 retry.go:31] will retry after 3.212757582s: waiting for machine to come up
	I1014 15:01:32.235659   72390 start.go:364] duration metric: took 3m35.715993521s to acquireMachinesLock for "default-k8s-diff-port-201291"
	I1014 15:01:32.235710   72390 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:32.235718   72390 fix.go:54] fixHost starting: 
	I1014 15:01:32.236084   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:32.236134   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:32.253294   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I1014 15:01:32.253760   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:32.254255   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:01:32.254275   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:32.254616   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:32.254797   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:32.254973   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:01:32.256494   72390 fix.go:112] recreateIfNeeded on default-k8s-diff-port-201291: state=Stopped err=<nil>
	I1014 15:01:32.256523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	W1014 15:01:32.256683   72390 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:32.258989   72390 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-201291" ...
	I1014 15:01:30.911781   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912283   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has current primary IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912313   72173 main.go:141] libmachine: (embed-certs-989166) Found IP for machine: 192.168.39.41
	I1014 15:01:30.912331   72173 main.go:141] libmachine: (embed-certs-989166) Reserving static IP address...
	I1014 15:01:30.912771   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.912799   72173 main.go:141] libmachine: (embed-certs-989166) DBG | skip adding static IP to network mk-embed-certs-989166 - found existing host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"}
	I1014 15:01:30.912806   72173 main.go:141] libmachine: (embed-certs-989166) Reserved static IP address: 192.168.39.41
	I1014 15:01:30.912815   72173 main.go:141] libmachine: (embed-certs-989166) Waiting for SSH to be available...
	I1014 15:01:30.912822   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Getting to WaitForSSH function...
	I1014 15:01:30.914919   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915273   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.915310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915392   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH client type: external
	I1014 15:01:30.915414   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa (-rw-------)
	I1014 15:01:30.915465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:30.915489   72173 main.go:141] libmachine: (embed-certs-989166) DBG | About to run SSH command:
	I1014 15:01:30.915503   72173 main.go:141] libmachine: (embed-certs-989166) DBG | exit 0
	I1014 15:01:31.042620   72173 main.go:141] libmachine: (embed-certs-989166) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:31.043061   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetConfigRaw
	I1014 15:01:31.043675   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.046338   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046679   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.046720   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046941   72173 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/config.json ...
	I1014 15:01:31.047132   72173 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:31.047149   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.047348   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.049453   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049835   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.049857   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049978   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.050147   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050282   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050419   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.050573   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.050814   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.050828   72173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:31.163270   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:31.163306   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163614   72173 buildroot.go:166] provisioning hostname "embed-certs-989166"
	I1014 15:01:31.163644   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.166684   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167009   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.167040   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.167416   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167582   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167718   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.167857   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.168025   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.168040   72173 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-989166 && echo "embed-certs-989166" | sudo tee /etc/hostname
	I1014 15:01:31.292369   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-989166
	
	I1014 15:01:31.292405   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.295057   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295425   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.295449   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295713   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.295915   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296076   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296220   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.296395   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.296552   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.296567   72173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-989166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-989166/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-989166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:31.411080   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:31.411112   72173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:31.411131   72173 buildroot.go:174] setting up certificates
	I1014 15:01:31.411142   72173 provision.go:84] configureAuth start
	I1014 15:01:31.411150   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.411396   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.413972   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414319   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.414341   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.416775   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417092   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.417113   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417278   72173 provision.go:143] copyHostCerts
	I1014 15:01:31.417340   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:31.417353   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:31.417437   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:31.417549   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:31.417559   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:31.417600   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:31.417677   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:31.417687   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:31.417721   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:31.417788   72173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.embed-certs-989166 san=[127.0.0.1 192.168.39.41 embed-certs-989166 localhost minikube]
	I1014 15:01:31.599973   72173 provision.go:177] copyRemoteCerts
	I1014 15:01:31.600034   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:31.600060   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.602964   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603270   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.603296   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.603665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.603821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.603949   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:31.688890   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:31.713474   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 15:01:31.737692   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 15:01:31.760955   72173 provision.go:87] duration metric: took 349.799595ms to configureAuth
	I1014 15:01:31.760986   72173 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:31.761172   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:31.761244   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.763800   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764149   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.764181   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.764494   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764636   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764732   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.764852   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.765002   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.765016   72173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:31.992783   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:31.992811   72173 machine.go:96] duration metric: took 945.667058ms to provisionDockerMachine
	I1014 15:01:31.992823   72173 start.go:293] postStartSetup for "embed-certs-989166" (driver="kvm2")
	I1014 15:01:31.992834   72173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:31.992848   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.993203   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:31.993235   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.995966   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996380   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.996418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996538   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.996714   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.996864   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.997003   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.081931   72173 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:32.086191   72173 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:32.086218   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:32.086287   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:32.086368   72173 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:32.086455   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:32.096414   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:32.120348   72173 start.go:296] duration metric: took 127.509685ms for postStartSetup
	I1014 15:01:32.120392   72173 fix.go:56] duration metric: took 19.044380323s for fixHost
	I1014 15:01:32.120412   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.123024   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123435   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.123465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123649   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.123832   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.123986   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.124152   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.124288   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:32.124487   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:32.124502   72173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:32.235487   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918092.208431219
	
	I1014 15:01:32.235513   72173 fix.go:216] guest clock: 1728918092.208431219
	I1014 15:01:32.235522   72173 fix.go:229] Guest: 2024-10-14 15:01:32.208431219 +0000 UTC Remote: 2024-10-14 15:01:32.12039587 +0000 UTC m=+242.874215269 (delta=88.035349ms)
	I1014 15:01:32.235559   72173 fix.go:200] guest clock delta is within tolerance: 88.035349ms
	I1014 15:01:32.235572   72173 start.go:83] releasing machines lock for "embed-certs-989166", held for 19.159587089s
	I1014 15:01:32.235600   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.235877   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:32.238642   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.238995   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.239025   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.239175   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239719   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239891   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239978   72173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:32.240031   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.240091   72173 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:32.240115   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.242742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243102   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243177   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243275   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243482   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.243653   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243664   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.243676   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243811   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243822   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.243929   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.244050   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.244168   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.357542   72173 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:32.365113   72173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:32.510557   72173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:32.516545   72173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:32.516628   72173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:32.533449   72173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:32.533473   72173 start.go:495] detecting cgroup driver to use...
	I1014 15:01:32.533549   72173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:32.549503   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:32.563126   72173 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:32.563184   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:32.576972   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:32.591047   72173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:32.704839   72173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:32.844770   72173 docker.go:233] disabling docker service ...
	I1014 15:01:32.844855   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:32.859524   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:32.872297   72173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:33.014291   72173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:33.136889   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:33.151656   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:33.170504   72173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:33.170575   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.180894   72173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:33.180968   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.192268   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.203509   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.215958   72173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:33.227981   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.241615   72173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.261168   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.273098   72173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:33.284050   72173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:33.284225   72173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:33.299547   72173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:33.310259   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:33.426563   72173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:33.526759   72173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:33.526817   72173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:33.532297   72173 start.go:563] Will wait 60s for crictl version
	I1014 15:01:33.532356   72173 ssh_runner.go:195] Run: which crictl
	I1014 15:01:33.536385   72173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:33.576222   72173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:33.576305   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.604603   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.636261   72173 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:33.637497   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:33.640450   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.640768   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:33.640806   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.641001   72173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:33.645241   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:33.658028   72173 kubeadm.go:883] updating cluster {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:33.658181   72173 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:33.658246   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:33.695188   72173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:33.695261   72173 ssh_runner.go:195] Run: which lz4
	I1014 15:01:33.699735   72173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:33.704540   72173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:33.704576   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:32.260401   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Start
	I1014 15:01:32.260569   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring networks are active...
	I1014 15:01:32.261176   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network default is active
	I1014 15:01:32.261498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network mk-default-k8s-diff-port-201291 is active
	I1014 15:01:32.261795   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Getting domain xml...
	I1014 15:01:32.262414   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Creating domain...
	I1014 15:01:33.520115   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting to get IP...
	I1014 15:01:33.521127   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521518   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.521520   73405 retry.go:31] will retry after 278.409801ms: waiting for machine to come up
	I1014 15:01:33.802289   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802720   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.802688   73405 retry.go:31] will retry after 362.923826ms: waiting for machine to come up
	I1014 15:01:34.167836   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168228   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168273   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.168163   73405 retry.go:31] will retry after 315.156371ms: waiting for machine to come up
	I1014 15:01:34.485445   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485855   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.485840   73405 retry.go:31] will retry after 573.46626ms: waiting for machine to come up
	I1014 15:01:35.061472   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.061997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.062027   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.061965   73405 retry.go:31] will retry after 519.420022ms: waiting for machine to come up
	I1014 15:01:35.582694   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583130   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583155   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.583062   73405 retry.go:31] will retry after 661.055324ms: waiting for machine to come up
	I1014 15:01:36.245525   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:36.245834   73405 retry.go:31] will retry after 870.411428ms: waiting for machine to come up
	I1014 15:01:35.120269   72173 crio.go:462] duration metric: took 1.42058504s to copy over tarball
	I1014 15:01:35.120372   72173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:37.206126   72173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08572724s)
	I1014 15:01:37.206168   72173 crio.go:469] duration metric: took 2.085859852s to extract the tarball
	I1014 15:01:37.206176   72173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:37.243007   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:37.289639   72173 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:37.289667   72173 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:37.289678   72173 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.31.1 crio true true} ...
	I1014 15:01:37.289793   72173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-989166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:37.289878   72173 ssh_runner.go:195] Run: crio config
	I1014 15:01:37.348641   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:37.348665   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:37.348684   72173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:37.348711   72173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-989166 NodeName:embed-certs-989166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:37.348861   72173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-989166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:37.348925   72173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:37.359204   72173 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:37.359272   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:37.368810   72173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 15:01:37.385402   72173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:37.401828   72173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
	I1014 15:01:37.418811   72173 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:37.422655   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:37.434567   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:37.561408   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:37.579549   72173 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166 for IP: 192.168.39.41
	I1014 15:01:37.579577   72173 certs.go:194] generating shared ca certs ...
	I1014 15:01:37.579596   72173 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:37.579766   72173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:37.579878   72173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:37.579894   72173 certs.go:256] generating profile certs ...
	I1014 15:01:37.579998   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/client.key
	I1014 15:01:37.580079   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key.8939f8c2
	I1014 15:01:37.580148   72173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key
	I1014 15:01:37.580316   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:37.580364   72173 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:37.580376   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:37.580413   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:37.580445   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:37.580482   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:37.580536   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:37.581259   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:37.632130   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:37.678608   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:37.705377   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:37.731897   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 15:01:37.775043   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:37.801653   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:37.826547   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:37.852086   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:37.878715   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:37.905883   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:37.932458   72173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:37.951362   72173 ssh_runner.go:195] Run: openssl version
	I1014 15:01:37.957730   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:37.969936   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974871   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974931   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.981060   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:37.992086   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:38.003528   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008267   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008332   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.014243   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:38.025272   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:38.036191   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040751   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040804   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.046605   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:38.057815   72173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:38.062497   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:38.068889   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:38.075278   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:38.081663   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:38.087892   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:38.093748   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:01:38.099807   72173 kubeadm.go:392] StartCluster: {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:38.099912   72173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:38.099960   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.140896   72173 cri.go:89] found id: ""
	I1014 15:01:38.140973   72173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:38.151443   72173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:38.151462   72173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:38.151512   72173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:38.161419   72173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:38.162357   72173 kubeconfig.go:125] found "embed-certs-989166" server: "https://192.168.39.41:8443"
	I1014 15:01:38.164328   72173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:38.174731   72173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.41
	I1014 15:01:38.174767   72173 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:38.174782   72173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:38.174849   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.214903   72173 cri.go:89] found id: ""
	I1014 15:01:38.214982   72173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:38.232891   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:38.242711   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:38.242735   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:38.242793   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:01:38.251939   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:38.252019   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:38.262108   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:01:38.271688   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:38.271751   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:38.281420   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.290693   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:38.290752   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.300205   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:01:38.309174   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:38.309236   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:38.318616   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:38.328337   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:38.436297   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:37.118307   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:37.118706   73405 retry.go:31] will retry after 1.481454557s: waiting for machine to come up
	I1014 15:01:38.601780   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602168   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602212   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:38.602118   73405 retry.go:31] will retry after 1.22705177s: waiting for machine to come up
	I1014 15:01:39.831413   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831889   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831963   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:39.831838   73405 retry.go:31] will retry after 1.898722681s: waiting for machine to come up
	I1014 15:01:39.574410   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138075676s)
	I1014 15:01:39.574444   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.789417   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.873563   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:40.011579   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:40.011673   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:40.511877   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.012608   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.512235   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.012435   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.047878   72173 api_server.go:72] duration metric: took 2.036298602s to wait for apiserver process to appear ...
	I1014 15:01:42.047909   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:01:42.047935   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.298692   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.298726   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.298743   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.317315   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.317353   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.548651   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.559477   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:44.559513   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.048060   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.057070   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.057099   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.548344   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.552611   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.552640   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:46.048314   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:46.054943   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:01:46.062740   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:01:46.062769   72173 api_server.go:131] duration metric: took 4.014851988s to wait for apiserver health ...
	I1014 15:01:46.062779   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:46.062785   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:46.064824   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:01:41.731928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732483   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732515   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:41.732435   73405 retry.go:31] will retry after 2.349662063s: waiting for machine to come up
	I1014 15:01:44.083975   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084492   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:44.084437   73405 retry.go:31] will retry after 3.472214726s: waiting for machine to come up
	I1014 15:01:46.066505   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:01:46.092975   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:01:46.123873   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:01:46.142575   72173 system_pods.go:59] 8 kube-system pods found
	I1014 15:01:46.142636   72173 system_pods.go:61] "coredns-7c65d6cfc9-r8x9s" [5a00095c-8777-412a-a7af-319a03d6153e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:01:46.142647   72173 system_pods.go:61] "etcd-embed-certs-989166" [981d2f54-f128-4527-a7cb-a6b9c647740b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:01:46.142658   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [31780b5a-6ebf-4c75-bd27-64a95193827f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:01:46.142668   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [345e7656-579a-4be9-bcf0-4117880a2988] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:01:46.142678   72173 system_pods.go:61] "kube-proxy-7p84k" [5d8243a8-7247-490f-9102-61008a614a67] Running
	I1014 15:01:46.142685   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [53b4b4a4-74ec-485e-99e3-b53c2edc80ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:01:46.142695   72173 system_pods.go:61] "metrics-server-6867b74b74-zc8zh" [5abf22c7-d271-4c3a-8e0e-cd867142cee1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:01:46.142703   72173 system_pods.go:61] "storage-provisioner" [6860efa4-c72f-477f-b9e1-e90ddcd112b5] Running
	I1014 15:01:46.142711   72173 system_pods.go:74] duration metric: took 18.811157ms to wait for pod list to return data ...
	I1014 15:01:46.142722   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:01:46.154420   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:01:46.154449   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:01:46.154463   72173 node_conditions.go:105] duration metric: took 11.735142ms to run NodePressure ...
	I1014 15:01:46.154483   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:46.417106   72173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422102   72173 kubeadm.go:739] kubelet initialised
	I1014 15:01:46.422127   72173 kubeadm.go:740] duration metric: took 4.991248ms waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422135   72173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:01:46.428014   72173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.432946   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432965   72173 pod_ready.go:82] duration metric: took 4.927935ms for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.432972   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432979   72173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.441849   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441868   72173 pod_ready.go:82] duration metric: took 8.882863ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.441877   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441883   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.446863   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446891   72173 pod_ready.go:82] duration metric: took 4.997658ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.446912   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446922   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.526949   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526972   72173 pod_ready.go:82] duration metric: took 80.035898ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.526981   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526987   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927217   72173 pod_ready.go:93] pod "kube-proxy-7p84k" in "kube-system" namespace has status "Ready":"True"
	I1014 15:01:46.927249   72173 pod_ready.go:82] duration metric: took 400.252417ms for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927263   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:48.933034   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:47.558671   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559112   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:47.559067   73405 retry.go:31] will retry after 3.421253013s: waiting for machine to come up
	I1014 15:01:50.981602   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has current primary IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982167   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Found IP for machine: 192.168.50.128
	I1014 15:01:50.982186   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserving static IP address...
	I1014 15:01:50.982682   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.982703   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserved static IP address: 192.168.50.128
	I1014 15:01:50.982722   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | skip adding static IP to network mk-default-k8s-diff-port-201291 - found existing host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"}
	I1014 15:01:50.982743   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Getting to WaitForSSH function...
	I1014 15:01:50.982781   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for SSH to be available...
	I1014 15:01:50.985084   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.985640   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985750   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH client type: external
	I1014 15:01:50.985778   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa (-rw-------)
	I1014 15:01:50.985814   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:50.985832   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | About to run SSH command:
	I1014 15:01:50.985849   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | exit 0
	I1014 15:01:51.123927   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:51.124457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetConfigRaw
	I1014 15:01:51.125106   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.128286   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.128716   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.128770   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.129045   72390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/config.json ...
	I1014 15:01:51.129283   72390 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:51.129308   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.129551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.131756   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132164   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.132207   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132488   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.132701   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.132873   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.133022   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.133181   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.133421   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.133436   72390 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:51.244659   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:51.244691   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.244923   72390 buildroot.go:166] provisioning hostname "default-k8s-diff-port-201291"
	I1014 15:01:51.244953   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.245149   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.248061   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248429   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.248463   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248521   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.248697   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.248887   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.249034   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.249227   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.249448   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.249463   72390 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-201291 && echo "default-k8s-diff-port-201291" | sudo tee /etc/hostname
	I1014 15:01:51.373260   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-201291
	
	I1014 15:01:51.373293   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.376195   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376528   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.376549   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376752   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.376962   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377159   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377296   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.377446   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.377657   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.377676   72390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-201291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-201291/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-201291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:52.179441   72639 start.go:364] duration metric: took 3m34.072351032s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 15:01:52.179497   72639 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:52.179505   72639 fix.go:54] fixHost starting: 
	I1014 15:01:52.179834   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:52.179873   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:52.196724   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I1014 15:01:52.197171   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:52.197649   72639 main.go:141] libmachine: Using API Version  1
	I1014 15:01:52.197673   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:52.198010   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:52.198191   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:01:52.198337   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 15:01:52.199789   72639 fix.go:112] recreateIfNeeded on old-k8s-version-399767: state=Stopped err=<nil>
	I1014 15:01:52.199826   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	W1014 15:01:52.199998   72639 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:52.202220   72639 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	I1014 15:01:52.203601   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .Start
	I1014 15:01:52.203771   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 15:01:52.204575   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 15:01:52.204971   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 15:01:52.205326   72639 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 15:01:52.206026   72639 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 15:01:51.488446   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:51.488486   72390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:51.488535   72390 buildroot.go:174] setting up certificates
	I1014 15:01:51.488553   72390 provision.go:84] configureAuth start
	I1014 15:01:51.488570   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.488867   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.491749   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492141   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.492171   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492351   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.494197   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.494524   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494693   72390 provision.go:143] copyHostCerts
	I1014 15:01:51.494745   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:51.494764   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:51.494834   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:51.494945   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:51.494958   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:51.494992   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:51.495081   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:51.495095   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:51.495122   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:51.495214   72390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-201291 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-201291 localhost minikube]
	I1014 15:01:51.567041   72390 provision.go:177] copyRemoteCerts
	I1014 15:01:51.567098   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:51.567121   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.570006   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570340   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.570368   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570562   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.570769   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.570941   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.571047   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:51.652956   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:51.677959   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 15:01:51.702009   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:01:51.727016   72390 provision.go:87] duration metric: took 238.449189ms to configureAuth
	I1014 15:01:51.727043   72390 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:51.727207   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:51.727276   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.729742   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730043   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.730065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.730418   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730578   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730735   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.730891   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.731097   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.731114   72390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:51.942847   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:51.942874   72390 machine.go:96] duration metric: took 813.575194ms to provisionDockerMachine
	I1014 15:01:51.942888   72390 start.go:293] postStartSetup for "default-k8s-diff-port-201291" (driver="kvm2")
	I1014 15:01:51.942903   72390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:51.942926   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.943250   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:51.943283   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.946246   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946608   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.946638   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946799   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.946984   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.947165   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.947293   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.030124   72390 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:52.034493   72390 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:52.034525   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:52.034625   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:52.034740   72390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:52.034834   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:52.044919   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:52.068326   72390 start.go:296] duration metric: took 125.426221ms for postStartSetup
	I1014 15:01:52.068370   72390 fix.go:56] duration metric: took 19.832650283s for fixHost
	I1014 15:01:52.068394   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.070949   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071362   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.071388   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071588   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.071788   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.071908   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.072065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.072231   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:52.072449   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:52.072468   72390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:52.179264   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918112.149610573
	
	I1014 15:01:52.179291   72390 fix.go:216] guest clock: 1728918112.149610573
	I1014 15:01:52.179301   72390 fix.go:229] Guest: 2024-10-14 15:01:52.149610573 +0000 UTC Remote: 2024-10-14 15:01:52.06837553 +0000 UTC m=+235.685992564 (delta=81.235043ms)
	I1014 15:01:52.179349   72390 fix.go:200] guest clock delta is within tolerance: 81.235043ms
	I1014 15:01:52.179354   72390 start.go:83] releasing machines lock for "default-k8s-diff-port-201291", held for 19.943664398s
	I1014 15:01:52.179387   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.179666   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:52.182457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.182834   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.182861   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.183000   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183598   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183883   72390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:52.183928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.183993   72390 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:52.184017   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.186499   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186692   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186890   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.186915   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187021   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.187050   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187086   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187288   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187331   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187479   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187485   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187597   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.187688   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187843   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.264102   72390 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:52.291233   72390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:52.443318   72390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:52.450321   72390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:52.450400   72390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:52.467949   72390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:52.467975   72390 start.go:495] detecting cgroup driver to use...
	I1014 15:01:52.468039   72390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:52.485758   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:52.500662   72390 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:52.500729   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:52.520846   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:52.535606   72390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:52.671062   72390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:52.845631   72390 docker.go:233] disabling docker service ...
	I1014 15:01:52.845694   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:52.867403   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:52.882344   72390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:53.020570   72390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:53.157941   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:53.174989   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:53.195729   72390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:53.195799   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.207613   72390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:53.207671   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.218838   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.231186   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.247521   72390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:53.258128   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.269119   72390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.287810   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.298576   72390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:53.308114   72390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:53.308169   72390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:53.322207   72390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:53.332284   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:53.483702   72390 ssh_runner.go:195] Run: sudo systemctl restart crio
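
The commands above rewrite /etc/crictl.yaml and /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroup manager, conmon cgroup, default sysctls) and then restart cri-o. A minimal Go sketch of the same idea follows: an idempotent key = "value" rewrite of a TOML-style drop-in file. The file path and the two values come from the log above; the helper itself is illustrative and is not minikube's actual code.

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setTOMLKey rewrites (or appends) a `key = "value"` line in a cri-o
    // drop-in file, mirroring the logged `sed -i 's|^.*key = .*$|...|'` calls.
    func setTOMLKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
        line := fmt.Sprintf("%s = %q", key, value)
        if re.Match(data) {
            data = re.ReplaceAll(data, []byte(line))
        } else {
            data = append(data, []byte("\n"+line+"\n")...)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        _ = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
        _ = setTOMLKey(conf, "cgroup_manager", "cgroupfs")
    }
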
	I1014 15:01:53.581260   72390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:53.581341   72390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:53.586042   72390 start.go:563] Will wait 60s for crictl version
	I1014 15:01:53.586105   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:01:53.589931   72390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:53.634776   72390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:53.634864   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.664242   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.698374   72390 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:50.933590   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:52.935445   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:53.699730   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:53.702837   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703224   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:53.703245   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703528   72390 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:53.707720   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:53.721953   72390 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:53.722106   72390 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:53.722165   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:53.779083   72390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:53.779139   72390 ssh_runner.go:195] Run: which lz4
	I1014 15:01:53.783197   72390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:53.787515   72390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:53.787549   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:55.277150   72390 crio.go:462] duration metric: took 1.493980352s to copy over tarball
	I1014 15:01:55.277212   72390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:53.506315   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 15:01:53.507576   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.508228   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.508297   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.508202   73581 retry.go:31] will retry after 220.59125ms: waiting for machine to come up
	I1014 15:01:53.730853   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.731286   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.731339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.731257   73581 retry.go:31] will retry after 321.559387ms: waiting for machine to come up
	I1014 15:01:54.054891   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.055482   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.055509   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.055443   73581 retry.go:31] will retry after 444.912998ms: waiting for machine to come up
	I1014 15:01:54.502125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.502479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.502525   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.502462   73581 retry.go:31] will retry after 600.214254ms: waiting for machine to come up
	I1014 15:01:55.104962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.105479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.105504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.105425   73581 retry.go:31] will retry after 686.77698ms: waiting for machine to come up
	I1014 15:01:55.794125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.794825   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.794871   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.794717   73581 retry.go:31] will retry after 926.146146ms: waiting for machine to come up
	I1014 15:01:56.722712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:56.723153   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:56.723183   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:56.723112   73581 retry.go:31] will retry after 1.108272037s: waiting for machine to come up
	I1014 15:01:57.832729   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:57.833304   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:57.833356   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:57.833279   73581 retry.go:31] will retry after 1.442737664s: waiting for machine to come up
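
While this is happening, the old-k8s-version-399767 VM is still waiting for a DHCP lease, and retry.go sleeps a growing, slightly jittered delay between IP lookups (220ms, 321ms, 444ms, ... in the lines above). A minimal Go sketch of that retry-with-backoff pattern, assuming a caller-supplied lookup function; the growth factor, jitter, and sample address are illustrative, not minikube's exact retry parameters.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a growing,
    // jittered delay between attempts, as in the "will retry after ..." lines.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 5 {
                return "", errors.New("no lease yet") // simulate DHCP not ready
            }
            return "192.168.61.10", nil // hypothetical address for the example
        }, time.Minute)
        fmt.Println(ip, err)
    }
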
	I1014 15:01:55.435691   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.933561   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.424526   72390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.147277316s)
	I1014 15:01:57.424559   72390 crio.go:469] duration metric: took 2.147385522s to extract the tarball
	I1014 15:01:57.424566   72390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:57.461792   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:57.504424   72390 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:57.504450   72390 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:57.504460   72390 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.1 crio true true} ...
	I1014 15:01:57.504656   72390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-201291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:57.504759   72390 ssh_runner.go:195] Run: crio config
	I1014 15:01:57.555431   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:01:57.555453   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:57.555462   72390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:57.555482   72390 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-201291 NodeName:default-k8s-diff-port-201291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:57.555593   72390 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-201291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.128"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:57.555652   72390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:57.565953   72390 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:57.566025   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:57.576141   72390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1014 15:01:57.594855   72390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:57.611249   72390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1014 15:01:57.628363   72390 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:57.632552   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
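
host.minikube.internal earlier, and control-plane.minikube.internal here, are pinned by filtering any stale entry out of /etc/hosts and appending the current IP in one shell pipeline. The same idempotent rewrite could be sketched in Go as below; it assumes write access to /etc/hosts and is only an illustration of the technique, not the code minikube runs.

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry drops any existing line for host and appends "ip\thost",
    // mirroring the logged `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` one-liner.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry, drop it
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        _ = ensureHostsEntry("/etc/hosts", "192.168.50.128", "control-plane.minikube.internal")
    }
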
	I1014 15:01:57.645588   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:57.769192   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:57.787654   72390 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291 for IP: 192.168.50.128
	I1014 15:01:57.787677   72390 certs.go:194] generating shared ca certs ...
	I1014 15:01:57.787695   72390 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:57.787865   72390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:57.787916   72390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:57.787930   72390 certs.go:256] generating profile certs ...
	I1014 15:01:57.788084   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/client.key
	I1014 15:01:57.788174   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key.517dfce8
	I1014 15:01:57.788223   72390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key
	I1014 15:01:57.788371   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:57.788407   72390 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:57.788417   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:57.788439   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:57.788460   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:57.788482   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:57.788521   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:57.789141   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:57.821159   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:57.875530   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:57.902687   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:57.935658   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 15:01:57.961987   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:57.987107   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:58.013544   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:58.039793   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:58.071154   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:58.102574   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:58.127398   72390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:58.144906   72390 ssh_runner.go:195] Run: openssl version
	I1014 15:01:58.150817   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:58.162122   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167170   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167240   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.173692   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:58.185769   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:58.197045   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201652   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201716   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.207559   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:58.218921   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:58.230822   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235774   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235832   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.241546   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:58.252618   72390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:58.257509   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:58.263891   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:58.270085   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:58.276427   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:58.282346   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:58.288396   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
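
The run of `openssl x509 -noout -checkend 86400` commands above only asks whether each control-plane certificate is still valid 24 hours from now. An equivalent check written directly against crypto/x509, as an illustrative sketch:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d, i.e. the same question as `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }
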
	I1014 15:01:58.294386   72390 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:58.294472   72390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:58.294517   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.342008   72390 cri.go:89] found id: ""
	I1014 15:01:58.342088   72390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:58.352478   72390 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:58.352512   72390 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:58.352566   72390 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:58.363158   72390 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:58.364106   72390 kubeconfig.go:125] found "default-k8s-diff-port-201291" server: "https://192.168.50.128:8444"
	I1014 15:01:58.366079   72390 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:58.375635   72390 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I1014 15:01:58.375666   72390 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:58.375680   72390 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:58.375733   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.411846   72390 cri.go:89] found id: ""
	I1014 15:01:58.411923   72390 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:58.428602   72390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:58.439214   72390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:58.439239   72390 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:58.439293   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1014 15:01:58.448475   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:58.448528   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:58.457816   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1014 15:01:58.467279   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:58.467352   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:58.477479   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.487899   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:58.487968   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.498296   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1014 15:01:58.507910   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:58.507977   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:58.517901   72390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:58.527983   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:58.654226   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.576099   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.790552   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.879043   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
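
restartPrimaryControlPlane rebuilds the control plane piecewise by invoking individual `kubeadm init phase` subcommands against the rendered /var/tmp/minikube/kubeadm.yaml, in the order shown above. A minimal sketch of driving the same phase sequence from Go with os/exec; it assumes kubeadm is on PATH and run with sufficient privileges (the log wraps each call in sudo env PATH=...), and the wrapper itself is illustrative.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        config := "/var/tmp/minikube/kubeadm.yaml"
        // Phase order as it appears in the log above.
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", config)
            if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("control plane phases completed")
    }
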
	I1014 15:01:59.963369   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:59.963462   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.464403   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.963891   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.994849   72390 api_server.go:72] duration metric: took 1.031477803s to wait for apiserver process to appear ...
	I1014 15:02:00.994875   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:00.994897   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
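
From this point the wait loop simply polls https://192.168.50.128:8444/healthz until the apiserver answers 200; the 403 and 500 bodies that follow are expected intermediate states while RBAC bootstrap roles and the remaining post-start hooks finish. A minimal Go polling sketch, assuming an anonymous client that skips TLS verification (which is exactly why the first reply below is a 403 for system:anonymous); illustrative only.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns 200
    // or the deadline passes. Anonymous + InsecureSkipVerify mirrors a bare
    // readiness probe rather than an authenticated client.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.50.128:8444/healthz", time.Minute))
    }
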
	I1014 15:01:59.278031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:59.278558   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:59.278586   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:59.278519   73581 retry.go:31] will retry after 1.187069828s: waiting for machine to come up
	I1014 15:02:00.467810   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:00.468237   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:00.468267   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:00.468195   73581 retry.go:31] will retry after 1.667312665s: waiting for machine to come up
	I1014 15:02:02.137067   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:02.137569   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:02.137590   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:02.137530   73581 retry.go:31] will retry after 1.910892221s: waiting for machine to come up
	I1014 15:01:59.994818   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:00.130085   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:00.130109   72173 pod_ready.go:82] duration metric: took 13.202838085s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:00.130121   72173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:02.142821   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:03.649728   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:03.649764   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:03.649780   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:03.754772   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:03.754805   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:03.995106   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.020015   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.020040   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.495270   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.501643   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.501694   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.995049   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.002865   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:05.002893   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:05.495412   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.499936   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:02:05.506656   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:02:05.506685   72390 api_server.go:131] duration metric: took 4.511803211s to wait for apiserver health ...
	I1014 15:02:05.506694   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:02:05.506700   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:05.508420   72390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:02:05.509685   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:02:05.521314   72390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:02:05.543021   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:02:05.553508   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:02:05.553539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:02:05.553548   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:02:05.553555   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:02:05.553562   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:02:05.553567   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:02:05.553572   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:02:05.553577   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:02:05.553581   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:02:05.553587   72390 system_pods.go:74] duration metric: took 10.544168ms to wait for pod list to return data ...
	I1014 15:02:05.553593   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:02:05.558889   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:02:05.558917   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:02:05.558929   72390 node_conditions.go:105] duration metric: took 5.331009ms to run NodePressure ...
	I1014 15:02:05.558948   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:05.819037   72390 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826431   72390 kubeadm.go:739] kubelet initialised
	I1014 15:02:05.826456   72390 kubeadm.go:740] duration metric: took 7.391664ms waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826463   72390 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:05.833547   72390 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.840150   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840175   72390 pod_ready.go:82] duration metric: took 6.599969ms for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.840186   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840205   72390 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.850319   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850346   72390 pod_ready.go:82] duration metric: took 10.130163ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.850359   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850368   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.857192   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857215   72390 pod_ready.go:82] duration metric: took 6.838793ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.857228   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857237   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.946611   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946646   72390 pod_ready.go:82] duration metric: took 89.397304ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.946663   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946674   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.346368   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346400   72390 pod_ready.go:82] duration metric: took 399.71513ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.346413   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346423   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.746899   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746928   72390 pod_ready.go:82] duration metric: took 400.494872ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.746941   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746951   72390 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:07.146147   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146175   72390 pod_ready.go:82] duration metric: took 399.215075ms for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:07.146199   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146215   72390 pod_ready.go:39] duration metric: took 1.319742206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:07.146237   72390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:02:07.158049   72390 ops.go:34] apiserver oom_adj: -16
	I1014 15:02:07.158072   72390 kubeadm.go:597] duration metric: took 8.805549392s to restartPrimaryControlPlane
	I1014 15:02:07.158082   72390 kubeadm.go:394] duration metric: took 8.863707122s to StartCluster
	I1014 15:02:07.158102   72390 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.158192   72390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:07.159622   72390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.159917   72390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:02:07.159968   72390 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:02:07.160052   72390 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160074   72390 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160086   72390 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:02:07.160125   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160133   72390 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160166   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:07.160181   72390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-201291"
	I1014 15:02:07.160179   72390 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160228   72390 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160251   72390 addons.go:243] addon metrics-server should already be in state true
	I1014 15:02:07.160312   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160472   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160508   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160692   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160712   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160729   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160770   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.161892   72390 out.go:177] * Verifying Kubernetes components...
	I1014 15:02:07.163368   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:07.176101   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1014 15:02:07.176351   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I1014 15:02:07.176705   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.176834   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.177272   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177298   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177392   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177413   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177600   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I1014 15:02:07.177639   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.177703   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.178070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.178181   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178244   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178252   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178285   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178566   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.178590   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.178944   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.179107   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.181971   72390 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.181989   72390 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:02:07.182024   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.182278   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.182322   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.194707   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1014 15:02:07.195401   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.196015   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.196043   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.196413   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.196511   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35479
	I1014 15:02:07.196618   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.196977   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.197479   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.197497   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.197520   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I1014 15:02:07.197848   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.197981   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.198048   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.198544   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.198567   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.198636   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199017   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.199817   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.199824   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199864   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.200860   72390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:07.201674   72390 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:02:04.050521   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:04.051060   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:04.051099   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:04.051015   73581 retry.go:31] will retry after 2.29433775s: waiting for machine to come up
	I1014 15:02:06.347519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:06.347985   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:06.348004   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:06.347945   73581 retry.go:31] will retry after 3.499922823s: waiting for machine to come up
	I1014 15:02:07.202461   72390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.202476   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:02:07.202491   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.203259   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:02:07.203275   72390 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:02:07.203292   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.205760   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206124   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.206150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206375   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.206533   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.206676   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.206729   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206858   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.207134   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.207150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.207248   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.207455   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.207559   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.207677   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.219554   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I1014 15:02:07.220070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.220483   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.220508   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.220842   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.221004   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.222706   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.222961   72390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.222979   72390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:02:07.222997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.225715   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226209   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.226250   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.226964   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.227118   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.227254   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.362105   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:07.384279   72390 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:07.438536   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.551868   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:02:07.551897   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:02:07.606347   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.656287   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:02:07.656313   72390 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:02:07.687002   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.687027   72390 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:02:07.751715   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.810869   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.810902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811193   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.811247   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811262   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811273   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.811281   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811546   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811562   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811576   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.819897   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.819917   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.820156   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.820206   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.820179   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581553   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581583   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.581902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581943   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.581955   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.581974   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581986   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.582197   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.582211   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595214   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595493   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.595569   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595589   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595609   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595623   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595833   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595847   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595864   72390 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-201291"
	I1014 15:02:08.597967   72390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:02:04.638029   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:07.139428   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.248505   71679 start.go:364] duration metric: took 53.170862497s to acquireMachinesLock for "no-preload-813300"
	I1014 15:02:11.248567   71679 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:02:11.248581   71679 fix.go:54] fixHost starting: 
	I1014 15:02:11.248978   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:11.249022   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:11.266270   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I1014 15:02:11.266780   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:11.267302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:02:11.267319   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:11.267675   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:11.267842   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:11.267984   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:02:11.269459   71679 fix.go:112] recreateIfNeeded on no-preload-813300: state=Stopped err=<nil>
	I1014 15:02:11.269484   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	W1014 15:02:11.269589   71679 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:02:11.271434   71679 out.go:177] * Restarting existing kvm2 VM for "no-preload-813300" ...
	I1014 15:02:08.599138   72390 addons.go:510] duration metric: took 1.439175047s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:02:09.388573   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:09.851017   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851562   72639 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 15:02:09.851582   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851587   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 15:02:09.851961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.851991   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | skip adding static IP to network mk-old-k8s-version-399767 - found existing host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"}
	I1014 15:02:09.852009   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 15:02:09.852021   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 15:02:09.852031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 15:02:09.854039   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854351   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.854378   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854493   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 15:02:09.854517   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 15:02:09.854547   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:09.854559   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 15:02:09.854572   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 15:02:09.979174   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:09.979594   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 15:02:09.980252   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:09.983038   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.983502   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983891   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 15:02:09.984191   72639 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:09.984220   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:09.984487   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:09.986947   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987361   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.987389   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987514   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:09.987682   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987830   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987924   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:09.988076   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:09.988338   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:09.988352   72639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:10.098944   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:10.098968   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099242   72639 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 15:02:10.099268   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099437   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.101961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102298   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.102320   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102468   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.102670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102846   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102980   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.103124   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.103337   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.103353   72639 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 15:02:10.226037   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 15:02:10.226069   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.228712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229059   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.229082   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229228   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.229408   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229549   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.229804   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.230001   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.230018   72639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:10.344175   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:10.344206   72639 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:10.344270   72639 buildroot.go:174] setting up certificates
	I1014 15:02:10.344284   72639 provision.go:84] configureAuth start
	I1014 15:02:10.344302   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.344632   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:10.347200   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347587   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.347623   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347812   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.349962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350332   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.350364   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350502   72639 provision.go:143] copyHostCerts
	I1014 15:02:10.350558   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:10.350574   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:10.350646   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:10.350734   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:10.350742   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:10.350762   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:10.350812   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:10.350819   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:10.350837   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:10.350887   72639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
	I1014 15:02:10.602118   72639 provision.go:177] copyRemoteCerts
	I1014 15:02:10.602175   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:10.602199   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.604519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604744   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.604776   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.605127   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.605273   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.605403   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:10.689081   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:10.713512   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 15:02:10.738086   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:10.762274   72639 provision.go:87] duration metric: took 417.977128ms to configureAuth
	I1014 15:02:10.762307   72639 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:10.762486   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 15:02:10.762552   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.765134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765442   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.765469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765600   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.765756   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765903   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765998   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.766131   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.766297   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.766311   72639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:11.011252   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:11.011279   72639 machine.go:96] duration metric: took 1.027069423s to provisionDockerMachine
	I1014 15:02:11.011292   72639 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 15:02:11.011304   72639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:11.011349   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.011716   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:11.011751   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.014418   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014754   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.014790   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.015125   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.015260   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.015376   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.097883   72639 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:11.102452   72639 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:11.102481   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:11.102551   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:11.102687   72639 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:11.102781   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:11.112774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:11.138211   72639 start.go:296] duration metric: took 126.906035ms for postStartSetup
	I1014 15:02:11.138247   72639 fix.go:56] duration metric: took 18.958741429s for fixHost
	I1014 15:02:11.138270   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.140740   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141100   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.141139   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141280   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.141484   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141668   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141811   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.141974   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:11.142131   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:11.142141   72639 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:11.248330   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918131.224010283
	
	I1014 15:02:11.248355   72639 fix.go:216] guest clock: 1728918131.224010283
	I1014 15:02:11.248373   72639 fix.go:229] Guest: 2024-10-14 15:02:11.224010283 +0000 UTC Remote: 2024-10-14 15:02:11.138252894 +0000 UTC m=+233.173555624 (delta=85.757389ms)
	I1014 15:02:11.248399   72639 fix.go:200] guest clock delta is within tolerance: 85.757389ms
	I1014 15:02:11.248406   72639 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 19.068928968s
	I1014 15:02:11.248434   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.248692   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:11.251774   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.252176   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252358   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.252840   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253017   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253104   72639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:11.253150   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.253232   72639 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:11.253259   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.256105   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256529   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256662   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.256732   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256771   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256844   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.256932   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.257003   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257141   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.257131   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.257296   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257414   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.363838   72639 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:11.370414   72639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:11.521232   72639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:11.527623   72639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:11.527712   72639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:11.544532   72639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:11.544559   72639 start.go:495] detecting cgroup driver to use...
	I1014 15:02:11.544614   72639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:11.561693   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:11.576555   72639 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:11.576622   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:11.593830   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:11.608785   72639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:11.731034   72639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:11.909278   72639 docker.go:233] disabling docker service ...
	I1014 15:02:11.909359   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:11.931218   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:11.951710   72639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:12.103012   72639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:12.252290   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:12.270497   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:12.293240   72639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 15:02:12.293297   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.304881   72639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:12.304958   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.316294   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.328591   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.340085   72639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:12.351765   72639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:12.362454   72639 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:12.362525   72639 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:12.376865   72639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:12.387779   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:12.528541   72639 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:12.635262   72639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:12.635335   72639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:12.641070   72639 start.go:563] Will wait 60s for crictl version
	I1014 15:02:12.641121   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:12.645111   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:12.691103   72639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:12.691199   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.720182   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.754856   72639 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 15:02:12.756005   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:12.759369   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.759890   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:12.759924   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.760164   72639 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:12.765342   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:12.782182   72639 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:12.782307   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 15:02:12.782374   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:12.841797   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:12.841871   72639 ssh_runner.go:195] Run: which lz4
	I1014 15:02:12.846193   72639 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:02:12.850982   72639 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:02:12.851019   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 15:02:09.636366   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.637804   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:13.638684   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.272626   71679 main.go:141] libmachine: (no-preload-813300) Calling .Start
	I1014 15:02:11.272827   71679 main.go:141] libmachine: (no-preload-813300) Ensuring networks are active...
	I1014 15:02:11.273510   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network default is active
	I1014 15:02:11.273954   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network mk-no-preload-813300 is active
	I1014 15:02:11.274410   71679 main.go:141] libmachine: (no-preload-813300) Getting domain xml...
	I1014 15:02:11.275263   71679 main.go:141] libmachine: (no-preload-813300) Creating domain...
	I1014 15:02:12.614590   71679 main.go:141] libmachine: (no-preload-813300) Waiting to get IP...
	I1014 15:02:12.615572   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.616018   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.616092   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.616013   73776 retry.go:31] will retry after 302.312986ms: waiting for machine to come up
	I1014 15:02:12.919678   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.920039   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.920074   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.920005   73776 retry.go:31] will retry after 371.392955ms: waiting for machine to come up
	I1014 15:02:13.292596   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.293214   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.293244   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.293164   73776 retry.go:31] will retry after 299.379251ms: waiting for machine to come up
	I1014 15:02:13.594808   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.595344   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.595370   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.595297   73776 retry.go:31] will retry after 598.480386ms: waiting for machine to come up
	I1014 15:02:14.195149   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.195744   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.195775   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.195696   73776 retry.go:31] will retry after 567.581822ms: waiting for machine to come up
	I1014 15:02:14.764315   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.764863   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.764886   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.764815   73776 retry.go:31] will retry after 587.597591ms: waiting for machine to come up
	I1014 15:02:15.353495   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:15.353948   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:15.353980   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:15.353896   73776 retry.go:31] will retry after 1.024496536s: waiting for machine to come up
	I1014 15:02:11.889135   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:13.889200   72390 node_ready.go:49] node "default-k8s-diff-port-201291" has status "Ready":"True"
	I1014 15:02:13.889228   72390 node_ready.go:38] duration metric: took 6.504919545s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:13.889240   72390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:13.898112   72390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:15.907127   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:14.579304   72639 crio.go:462] duration metric: took 1.733147869s to copy over tarball
	I1014 15:02:14.579405   72639 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:02:17.644891   72639 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06545265s)
	I1014 15:02:17.644954   72639 crio.go:469] duration metric: took 3.065620277s to extract the tarball
	I1014 15:02:17.644979   72639 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:02:17.688304   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:17.727862   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:17.727888   72639 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:17.727984   72639 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.727995   72639 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.728006   72639 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.728036   72639 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.727986   72639 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.728104   72639 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.728169   72639 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 15:02:17.728267   72639 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.729941   72639 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729954   72639 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 15:02:17.729984   72639 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.729999   72639 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.729913   72639 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.730335   72639 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.889181   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.912728   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.919124   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.920117   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.934314   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 15:02:17.951143   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.956588   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.964968   72639 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 15:02:17.965031   72639 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.965066   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:16.139535   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:18.637888   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:16.379768   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:16.380165   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:16.380236   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:16.380142   73776 retry.go:31] will retry after 1.022289492s: waiting for machine to come up
	I1014 15:02:17.403892   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:17.404406   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:17.404430   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:17.404383   73776 retry.go:31] will retry after 1.277226075s: waiting for machine to come up
	I1014 15:02:18.683704   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:18.684176   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:18.684200   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:18.684126   73776 retry.go:31] will retry after 2.146714263s: waiting for machine to come up
	I1014 15:02:18.406707   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.412201   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:21.406229   72390 pod_ready.go:93] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.406256   72390 pod_ready.go:82] duration metric: took 7.508120497s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.406269   72390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413868   72390 pod_ready.go:93] pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.413896   72390 pod_ready.go:82] duration metric: took 7.618897ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413910   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:18.041388   72639 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 15:02:18.041436   72639 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.041489   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041504   72639 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 15:02:18.041540   72639 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.041579   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069534   72639 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 15:02:18.069582   72639 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 15:02:18.069631   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069794   72639 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 15:02:18.069821   72639 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.069852   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.096492   72639 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 15:02:18.096536   72639 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.096575   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104764   72639 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 15:02:18.104810   72639 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.104816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.104854   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104876   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.104885   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.104980   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.104984   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.105025   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.119784   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.213816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.241644   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.288717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.288820   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.288931   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.289005   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.295481   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.376936   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.393755   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.449717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.449798   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.449824   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.449904   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.461905   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.508804   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 15:02:18.521502   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 15:02:18.612103   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 15:02:18.613450   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 15:02:18.613548   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 15:02:18.613625   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 15:02:18.613715   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 15:02:18.741774   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:18.888495   72639 cache_images.go:92] duration metric: took 1.16058525s to LoadCachedImages
	W1014 15:02:18.888578   72639 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1014 15:02:18.888594   72639 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 15:02:18.888707   72639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:18.888791   72639 ssh_runner.go:195] Run: crio config
	I1014 15:02:18.943058   72639 cni.go:84] Creating CNI manager for ""
	I1014 15:02:18.943082   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:18.943091   72639 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:18.943108   72639 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 15:02:18.943225   72639 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:18.943285   72639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 15:02:18.956635   72639 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:18.956727   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:18.970846   72639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 15:02:18.992163   72639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:19.012061   72639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 15:02:19.033158   72639 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:19.037195   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:19.051127   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:19.172992   72639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:19.190545   72639 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 15:02:19.190572   72639 certs.go:194] generating shared ca certs ...
	I1014 15:02:19.190592   72639 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.190786   72639 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:19.190843   72639 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:19.190853   72639 certs.go:256] generating profile certs ...
	I1014 15:02:19.190973   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 15:02:19.191053   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 15:02:19.191108   72639 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 15:02:19.191264   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:19.191302   72639 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:19.191314   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:19.191345   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:19.191374   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:19.191423   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:19.191477   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:19.192328   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:19.248981   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:19.281262   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:19.312859   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:19.351940   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 15:02:19.405710   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:19.441313   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:19.481774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 15:02:19.509433   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:19.537994   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:19.564460   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:19.593632   72639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:19.614775   72639 ssh_runner.go:195] Run: openssl version
	I1014 15:02:19.623548   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:19.636680   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642225   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642286   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.648609   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:19.661130   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:19.672988   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678119   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678189   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.684583   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:19.696685   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:19.708338   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713443   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713502   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.719482   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:19.731720   72639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:19.739006   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:19.747558   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:19.756399   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:19.764987   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:19.773320   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:19.781239   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:02:19.788638   72639 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:19.788753   72639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:19.788810   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.829586   72639 cri.go:89] found id: ""
	I1014 15:02:19.829641   72639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:19.844632   72639 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:19.844654   72639 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:19.844708   72639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:19.860547   72639 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:19.861848   72639 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:19.862755   72639 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-399767" cluster setting kubeconfig missing "old-k8s-version-399767" context setting]
	I1014 15:02:19.863757   72639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.927447   72639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:19.940830   72639 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.138
	I1014 15:02:19.940919   72639 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:19.940947   72639 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:19.941009   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.983689   72639 cri.go:89] found id: ""
	I1014 15:02:19.983769   72639 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:20.007079   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:20.023868   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:20.023896   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:20.023971   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:20.038661   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:20.038734   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:20.054357   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:20.068771   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:20.068843   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:20.081157   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.095416   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:20.095483   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.109099   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:20.120608   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:20.120680   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:20.133217   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:20.145896   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:20.311840   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.472918   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.161037865s)
	I1014 15:02:21.472953   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.739827   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.833423   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.931874   72639 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:21.931987   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.432595   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.932784   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
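With no kube-system containers found, restartPrimaryControlPlane regenerates everything from the saved kubeadm.yaml by running individual "kubeadm init phase" subcommands, then polls for the apiserver process. A condensed sketch of the same sequence (the log runs each command through "sudo env PATH=..." against the versioned kubeadm binary):

    # regenerate certificates and kubeconfig files
    sudo kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
    # write kubelet config and (re)start the kubelet
    sudo kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
    # recreate the control-plane and etcd static-pod manifests
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml
    # wait for kube-apiserver to show up as a process
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do sleep 0.5; done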
	I1014 15:02:21.138446   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.636836   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.833532   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:20.833974   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:20.834000   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:20.833930   73776 retry.go:31] will retry after 1.936414638s: waiting for machine to come up
	I1014 15:02:22.771789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:22.772183   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:22.772206   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:22.772148   73776 retry.go:31] will retry after 2.51581517s: waiting for machine to come up
	I1014 15:02:25.290082   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:25.290491   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:25.290518   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:25.290453   73776 retry.go:31] will retry after 3.279920525s: waiting for machine to come up
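The no-preload-813300 VM is still waiting for a DHCP lease here: libmachine looks the domain up by MAC address in the mk-no-preload-813300 libvirt network and retries with growing backoff until an IP appears. One way to inspect the same lease table by hand, assuming standard libvirt tooling on the host (this command is not part of the test run):

    # show current DHCP leases handed out on the minikube-created network
    virsh net-dhcp-leases mk-no-preload-813300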
	I1014 15:02:21.420355   72390 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.420385   72390 pod_ready.go:82] duration metric: took 6.465669ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.420398   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427723   72390 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.427747   72390 pod_ready.go:82] duration metric: took 7.340946ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427760   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433500   72390 pod_ready.go:93] pod "kube-proxy-rh82t" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.433526   72390 pod_ready.go:82] duration metric: took 5.757064ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433543   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802632   72390 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.802660   72390 pod_ready.go:82] duration metric: took 369.107697ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802672   72390 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:23.811046   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:26.308105   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
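pod_ready is polling each pod's Ready condition; the metrics-server pod never reports Ready in these runs because the StartCluster configs in this log override its registry to the stub fake.domain (CustomAddonRegistries), so the wait repeats until it times out. An equivalent manual check of the condition being polled, using the pod name from the log and assuming the profile's kubeconfig context follows the usual profile-name convention:

    kubectl --context default-k8s-diff-port-201291 -n kube-system get pod metrics-server-6867b74b74-bcrqs \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'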
	I1014 15:02:23.432728   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.932296   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.432079   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.932064   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.432201   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.932119   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.432423   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.932675   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.432633   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.932380   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.637287   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.137136   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.572901   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:28.573383   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:28.573421   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:28.573304   73776 retry.go:31] will retry after 5.283390724s: waiting for machine to come up
	I1014 15:02:28.310800   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:30.400310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.432518   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.932871   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.432350   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.932761   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.432621   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.932873   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.432716   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.932364   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.432747   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.933039   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.637300   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.136858   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.858151   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858626   71679 main.go:141] libmachine: (no-preload-813300) Found IP for machine: 192.168.61.13
	I1014 15:02:33.858660   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has current primary IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858670   71679 main.go:141] libmachine: (no-preload-813300) Reserving static IP address...
	I1014 15:02:33.859001   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.859022   71679 main.go:141] libmachine: (no-preload-813300) Reserved static IP address: 192.168.61.13
	I1014 15:02:33.859040   71679 main.go:141] libmachine: (no-preload-813300) DBG | skip adding static IP to network mk-no-preload-813300 - found existing host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"}
	I1014 15:02:33.859055   71679 main.go:141] libmachine: (no-preload-813300) DBG | Getting to WaitForSSH function...
	I1014 15:02:33.859065   71679 main.go:141] libmachine: (no-preload-813300) Waiting for SSH to be available...
	I1014 15:02:33.860949   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861245   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.861287   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861398   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH client type: external
	I1014 15:02:33.861424   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa (-rw-------)
	I1014 15:02:33.861460   71679 main.go:141] libmachine: (no-preload-813300) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:33.861476   71679 main.go:141] libmachine: (no-preload-813300) DBG | About to run SSH command:
	I1014 15:02:33.861488   71679 main.go:141] libmachine: (no-preload-813300) DBG | exit 0
	I1014 15:02:33.991450   71679 main.go:141] libmachine: (no-preload-813300) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:33.991854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetConfigRaw
	I1014 15:02:33.992623   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:33.995514   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.995884   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.995908   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.996225   71679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/config.json ...
	I1014 15:02:33.996549   71679 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:33.996572   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:33.996784   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:33.999385   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999751   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.999789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999948   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.000135   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000312   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000455   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.000648   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.000874   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.000890   71679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:34.114981   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:34.115014   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115245   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:02:34.115272   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115421   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.117557   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.117890   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.117929   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.118027   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.118210   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118365   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118524   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.118720   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.118913   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.118932   71679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-813300 && echo "no-preload-813300" | sudo tee /etc/hostname
	I1014 15:02:34.246092   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-813300
	
	I1014 15:02:34.246149   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.248672   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249095   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.249122   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249331   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.249505   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249860   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.250061   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.250272   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.250297   71679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:34.373470   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
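Host provisioning sets the guest's hostname over SSH and pins it in /etc/hosts, exactly as the heredoc above shows. Condensed (the full script also rewrites an existing 127.0.1.1 line in place instead of appending):

    # transient hostname plus persistent /etc/hostname
    sudo hostname no-preload-813300 && echo "no-preload-813300" | sudo tee /etc/hostname
    # map the name to 127.0.1.1 unless an entry for it already exists
    grep -xq '.*\sno-preload-813300' /etc/hosts || \
      echo '127.0.1.1 no-preload-813300' | sudo tee -a /etc/hosts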
	I1014 15:02:34.373512   71679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:34.373576   71679 buildroot.go:174] setting up certificates
	I1014 15:02:34.373594   71679 provision.go:84] configureAuth start
	I1014 15:02:34.373613   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.373903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:34.376697   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.376986   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.377009   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.377137   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.379469   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379813   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.379838   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379981   71679 provision.go:143] copyHostCerts
	I1014 15:02:34.380034   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:34.380050   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:34.380106   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:34.380194   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:34.380201   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:34.380223   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:34.380282   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:34.380288   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:34.380305   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:34.380362   71679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.no-preload-813300 san=[127.0.0.1 192.168.61.13 localhost minikube no-preload-813300]
	I1014 15:02:34.421281   71679 provision.go:177] copyRemoteCerts
	I1014 15:02:34.421331   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:34.421353   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.423903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424219   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.424248   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424471   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.424665   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.424807   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.424948   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.512847   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:34.539814   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:02:34.568946   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:34.593444   71679 provision.go:87] duration metric: took 219.83393ms to configureAuth
	I1014 15:02:34.593467   71679 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:34.593661   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:34.593744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.596317   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596626   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.596659   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596819   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.597008   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597159   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597295   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.597433   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.597611   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.597631   71679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:34.837224   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:34.837244   71679 machine.go:96] duration metric: took 840.680679ms to provisionDockerMachine
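The SSH command above is how the provisioner passes runtime flags to cri-o on this image: it drops an environment file carrying the insecure-registry setting for the service CIDR and restarts the daemon. A standalone sketch of the same step:

    # /etc/sysconfig/crio.minikube is presumably read as an environment file by the crio unit on the buildroot guest
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio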
	I1014 15:02:34.837256   71679 start.go:293] postStartSetup for "no-preload-813300" (driver="kvm2")
	I1014 15:02:34.837265   71679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:34.837281   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:34.837593   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:34.837625   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.840357   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840677   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.840702   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840845   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.841025   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.841193   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.841363   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.930754   71679 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:34.935428   71679 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:34.935457   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:34.935541   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:34.935659   71679 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:34.935795   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:34.946363   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:34.973029   71679 start.go:296] duration metric: took 135.76066ms for postStartSetup
	I1014 15:02:34.973074   71679 fix.go:56] duration metric: took 23.72449375s for fixHost
	I1014 15:02:34.973098   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.975897   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976211   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.976237   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976487   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.976687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976813   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976923   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.977075   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.977294   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.977309   71679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:35.091556   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918155.078304162
	
	I1014 15:02:35.091581   71679 fix.go:216] guest clock: 1728918155.078304162
	I1014 15:02:35.091590   71679 fix.go:229] Guest: 2024-10-14 15:02:35.078304162 +0000 UTC Remote: 2024-10-14 15:02:34.973079478 +0000 UTC m=+359.485826316 (delta=105.224684ms)
	I1014 15:02:35.091610   71679 fix.go:200] guest clock delta is within tolerance: 105.224684ms
	I1014 15:02:35.091616   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 23.843071366s
	I1014 15:02:35.091641   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.091899   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:35.094383   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094712   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.094733   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094910   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095353   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095534   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095589   71679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:35.095658   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.095750   71679 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:35.095773   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.098288   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098316   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098680   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098713   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098743   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098795   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098835   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099003   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099186   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099198   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099367   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099371   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.099513   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099728   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.179961   71679 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:35.205523   71679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:35.350662   71679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:35.356870   71679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:35.356941   71679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:35.374967   71679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:35.374997   71679 start.go:495] detecting cgroup driver to use...
	I1014 15:02:35.375067   71679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:35.393194   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:35.408295   71679 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:35.408362   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:35.423927   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:35.438753   71679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:32.809221   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:34.811962   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:35.567539   71679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:35.702830   71679 docker.go:233] disabling docker service ...
	I1014 15:02:35.702916   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:35.720822   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:35.735403   71679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:35.880532   71679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:36.003343   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:36.018230   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:36.037065   71679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:02:36.037134   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.047820   71679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:36.047880   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.058531   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.069760   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.081047   71679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:36.092384   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.103241   71679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.121771   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.132886   71679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:36.143239   71679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:36.143308   71679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:36.156582   71679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:36.165955   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:36.283857   71679 ssh_runner.go:195] Run: sudo systemctl restart crio
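Before restarting cri-o, the runner rewrites /etc/crio/crio.conf.d/02-crio.conf in place with the series of sed edits above and points crictl at the cri-o socket. Roughly the configuration those edits converge on (a fragment, not the full files):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf (fragment)
    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]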
	I1014 15:02:36.388165   71679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:36.388243   71679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:36.393324   71679 start.go:563] Will wait 60s for crictl version
	I1014 15:02:36.393378   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.397236   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:36.444749   71679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:36.444839   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.474831   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.520531   71679 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:02:33.432474   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.932719   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.432581   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.932863   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.432886   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.932915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.432852   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.932367   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.432894   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.933035   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.637235   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.137613   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:36.521865   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:36.524566   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.524956   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:36.524984   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.525213   71679 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:36.529579   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:36.542554   71679 kubeadm.go:883] updating cluster {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:36.542701   71679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:02:36.542737   71679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:36.585681   71679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:02:36.585719   71679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:36.585806   71679 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.585838   71679 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.585865   71679 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.585886   71679 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1014 15:02:36.585925   71679 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.585814   71679 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.585954   71679 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.585843   71679 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587263   71679 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.587290   71679 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.587326   71679 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587274   71679 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1014 15:02:36.737070   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.750146   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.750401   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.767605   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1014 15:02:36.775005   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.797223   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.833657   71679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1014 15:02:36.833708   71679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.833754   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.833875   71679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1014 15:02:36.833896   71679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.833929   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.850009   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.911675   71679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1014 15:02:36.911720   71679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.911779   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973319   71679 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1014 15:02:36.973354   71679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.973383   71679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1014 15:02:36.973394   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973414   71679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.973453   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.973456   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973519   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.973619   71679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1014 15:02:36.973640   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.973644   71679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.973671   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.044689   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.044739   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.044815   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.044860   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.044907   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.044947   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166670   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.166737   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166794   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.166908   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.166924   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.272802   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.272835   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.287078   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1014 15:02:37.287167   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.287207   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.287240   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1014 15:02:37.287293   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1014 15:02:37.287320   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:37.287367   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:37.354510   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.354621   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1014 15:02:37.354659   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1014 15:02:37.354676   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354700   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1014 15:02:37.354711   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:37.354719   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354790   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1014 15:02:37.354812   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1014 15:02:37.354865   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:37.532403   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.443614   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1: (2.089069189s)
	I1014 15:02:39.443676   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1014 15:02:39.443766   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.089027703s)
	I1014 15:02:39.443790   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1014 15:02:39.443775   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:39.443813   71679 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443833   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.089105476s)
	I1014 15:02:39.443854   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443861   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1014 15:02:39.443911   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.089031069s)
	I1014 15:02:39.443933   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1014 15:02:39.443986   71679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.911557292s)
	I1014 15:02:39.444029   71679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1014 15:02:39.444057   71679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.444111   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.309522   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:39.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.432551   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.932486   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.432591   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.932694   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.432065   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.932044   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.432313   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.933055   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.432453   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.932258   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.137656   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:42.637462   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:41.514958   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.071133048s)
	I1014 15:02:41.514987   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.071109487s)
	I1014 15:02:41.515016   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1014 15:02:41.515041   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515046   71679 ssh_runner.go:235] Completed: which crictl: (2.070916553s)
	I1014 15:02:41.514994   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1014 15:02:41.515093   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515105   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:41.569878   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401013   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.885889648s)
	I1014 15:02:43.401053   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1014 15:02:43.401068   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.831164682s)
	I1014 15:02:43.401082   71679 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:43.401131   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401139   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:41.809862   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.810054   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:45.810567   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.432054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.932139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.432261   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.932517   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.432959   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.933103   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.432845   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.932825   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.432059   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.932745   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.639020   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:47.136927   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:49.137423   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:46.799144   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.397987929s)
	I1014 15:02:46.799198   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 15:02:46.799201   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398044957s)
	I1014 15:02:46.799222   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1014 15:02:46.799249   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.799295   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:46.799296   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.804398   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1014 15:02:48.971377   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.171989764s)
	I1014 15:02:48.971409   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1014 15:02:48.971436   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.971481   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.309980   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.311361   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:48.432869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.432754   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.432199   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.932861   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.432404   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.932097   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.432569   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.933078   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.141481   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.638306   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.935341   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.963834471s)
	I1014 15:02:50.935373   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1014 15:02:50.935401   71679 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:50.935452   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:51.683211   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 15:02:51.683268   71679 cache_images.go:123] Successfully loaded all cached images
	I1014 15:02:51.683277   71679 cache_images.go:92] duration metric: took 15.097525447s to LoadCachedImages
	I1014 15:02:51.683293   71679 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.31.1 crio true true} ...
	I1014 15:02:51.683441   71679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:51.683525   71679 ssh_runner.go:195] Run: crio config
	I1014 15:02:51.737769   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:02:51.737790   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:51.737799   71679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:51.737818   71679 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-813300 NodeName:no-preload-813300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:02:51.737955   71679 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-813300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:51.738019   71679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:02:51.749175   71679 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:51.749241   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:51.759120   71679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1014 15:02:51.777293   71679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:51.795073   71679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1014 15:02:51.815094   71679 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:51.819087   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:51.831806   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:51.953191   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:51.972342   71679 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300 for IP: 192.168.61.13
	I1014 15:02:51.972362   71679 certs.go:194] generating shared ca certs ...
	I1014 15:02:51.972379   71679 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:51.972534   71679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:51.972583   71679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:51.972597   71679 certs.go:256] generating profile certs ...
	I1014 15:02:51.972732   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/client.key
	I1014 15:02:51.972822   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key.4d535e2d
	I1014 15:02:51.972885   71679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key
	I1014 15:02:51.973064   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:51.973102   71679 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:51.973111   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:51.973151   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:51.973180   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:51.973203   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:51.973260   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:51.974077   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:52.019451   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:52.048323   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:52.086241   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:52.129342   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:02:52.157243   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:52.189093   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:52.214980   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:02:52.241595   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:52.270329   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:52.295153   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:52.321303   71679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:52.339181   71679 ssh_runner.go:195] Run: openssl version
	I1014 15:02:52.345152   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:52.357167   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362387   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362442   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.369003   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:52.380917   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:52.392884   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397876   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397942   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.404038   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:52.415841   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:52.426973   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431848   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431914   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.439851   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:52.455014   71679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:52.460088   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:52.466495   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:52.472659   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:52.483107   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:52.491272   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:52.497692   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:02:52.504352   71679 kubeadm.go:392] StartCluster: {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:52.504456   71679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:52.504502   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.544010   71679 cri.go:89] found id: ""
	I1014 15:02:52.544074   71679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:52.554296   71679 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:52.554314   71679 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:52.554364   71679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:52.564193   71679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:52.565367   71679 kubeconfig.go:125] found "no-preload-813300" server: "https://192.168.61.13:8443"
	I1014 15:02:52.567519   71679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:52.577268   71679 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.13
	I1014 15:02:52.577296   71679 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:52.577305   71679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:52.577343   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.614462   71679 cri.go:89] found id: ""
	I1014 15:02:52.614551   71679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:52.631835   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:52.642314   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:52.642334   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:52.642378   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:52.652036   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:52.652114   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:52.662263   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:52.672145   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:52.672214   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:52.682085   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.691628   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:52.691706   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.701314   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:52.711232   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:52.711291   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:52.722480   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:52.733359   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:52.849407   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.647528   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.863718   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.938091   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:54.046445   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:54.046544   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.546715   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.047285   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.062239   71679 api_server.go:72] duration metric: took 1.015804644s to wait for apiserver process to appear ...
	I1014 15:02:55.062265   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:55.062296   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:55.062806   71679 api_server.go:269] stopped: https://192.168.61.13:8443/healthz: Get "https://192.168.61.13:8443/healthz": dial tcp 192.168.61.13:8443: connect: connection refused
	I1014 15:02:52.811186   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.309901   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.432335   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.932860   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.433105   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.933031   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.432058   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.932422   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.432618   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.932727   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.432265   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.932733   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.136357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.136956   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.562748   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.274557   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.274587   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.274625   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.296655   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.296682   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.563094   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.567676   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:58.567717   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.063266   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.067656   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.067697   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.563300   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.569667   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.569699   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:03:00.063305   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:03:00.067834   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:03:00.079522   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:03:00.079555   71679 api_server.go:131] duration metric: took 5.017283463s to wait for apiserver health ...
	I1014 15:03:00.079565   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:03:00.079572   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:03:00.081793   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:03:00.083132   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:03:00.095329   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:03:00.114972   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:03:00.148816   71679 system_pods.go:59] 8 kube-system pods found
	I1014 15:03:00.148849   71679 system_pods.go:61] "coredns-7c65d6cfc9-5cft7" [43bb92da-74e8-4430-a889-3c23ed3fef67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:03:00.148859   71679 system_pods.go:61] "etcd-no-preload-813300" [c3e9137c-855e-49e2-8891-8df57707f75a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:03:00.148867   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [683c2d48-6c84-470c-96e5-0706a1884ee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:03:00.148872   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [405991ef-9b48-4770-ba31-a213f0eae077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:03:00.148882   71679 system_pods.go:61] "kube-proxy-jd4t4" [6c5c517b-855e-440c-976e-9c5e5d0710f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:03:00.148887   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [e76569e6-74c8-44dd-b283-a82072226686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:03:00.148892   71679 system_pods.go:61] "metrics-server-6867b74b74-br4tl" [5b3425c6-9847-447d-a9ab-076c7cc1634f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:03:00.148896   71679 system_pods.go:61] "storage-provisioner" [2c52e790-afa9-4131-8e28-801eb3f822d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 15:03:00.148906   71679 system_pods.go:74] duration metric: took 33.908487ms to wait for pod list to return data ...
	I1014 15:03:00.148918   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:03:00.161000   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:03:00.161029   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:03:00.161042   71679 node_conditions.go:105] duration metric: took 12.118841ms to run NodePressure ...
	I1014 15:03:00.161067   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:03:00.510702   71679 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515692   71679 kubeadm.go:739] kubelet initialised
	I1014 15:03:00.515715   71679 kubeadm.go:740] duration metric: took 4.986873ms waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515724   71679 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:03:00.521483   71679 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:57.810518   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:59.811287   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.432774   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.932666   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.433020   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.932671   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.432717   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.932917   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.432735   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.932668   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.432260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.932075   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.137257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.137876   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.528402   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.530210   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:04.530241   71679 pod_ready.go:82] duration metric: took 4.008725187s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:04.530254   71679 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:02.309134   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.311421   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:03.432139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.932241   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.432421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.932869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.432972   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.933010   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.432409   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.932778   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.432067   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.932749   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.636760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:07.136410   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.137483   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.537318   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.037462   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.810244   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.810932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.813334   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.432529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.932034   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.933054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.432938   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.932661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.432392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.932068   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.432066   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.932122   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.636654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.637819   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.536905   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:10.536932   71679 pod_ready.go:82] duration metric: took 6.006669219s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:10.536945   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:12.551283   71679 pod_ready.go:103] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.044142   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.044166   71679 pod_ready.go:82] duration metric: took 2.507213726s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.044176   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049176   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.049196   71679 pod_ready.go:82] duration metric: took 5.01377ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049206   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053623   71679 pod_ready.go:93] pod "kube-proxy-jd4t4" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.053646   71679 pod_ready.go:82] duration metric: took 4.434586ms for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053654   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559610   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.559632   71679 pod_ready.go:82] duration metric: took 505.972722ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559642   71679 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.309622   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.432556   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.932427   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.432053   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.932460   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.432714   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.933071   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.432567   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.932414   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.432985   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.932960   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.136599   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.137964   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.566234   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.567065   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:20.066221   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.309837   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:19.310194   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.433026   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.932015   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.932030   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.433050   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.932658   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.432667   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.933045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:21.933127   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:21.973476   72639 cri.go:89] found id: ""
	I1014 15:03:21.973507   72639 logs.go:282] 0 containers: []
	W1014 15:03:21.973517   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:21.973523   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:21.973584   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:22.011700   72639 cri.go:89] found id: ""
	I1014 15:03:22.011732   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.011742   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:22.011748   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:22.011814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:22.047721   72639 cri.go:89] found id: ""
	I1014 15:03:22.047744   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.047752   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:22.047762   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:22.047814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:22.091618   72639 cri.go:89] found id: ""
	I1014 15:03:22.091644   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.091652   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:22.091657   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:22.091706   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:22.129997   72639 cri.go:89] found id: ""
	I1014 15:03:22.130036   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.130047   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:22.130055   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:22.130114   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:22.168024   72639 cri.go:89] found id: ""
	I1014 15:03:22.168053   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.168061   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:22.168067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:22.168136   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:22.202633   72639 cri.go:89] found id: ""
	I1014 15:03:22.202660   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.202670   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:22.202677   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:22.202739   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:22.238224   72639 cri.go:89] found id: ""
	I1014 15:03:22.238251   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.238259   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:22.238267   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:22.238278   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:22.251940   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:22.251991   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:22.379777   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:22.379799   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:22.379814   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:22.456468   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:22.456507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:22.495404   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:22.495433   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:20.636995   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.637141   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.066371   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.566023   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:21.809579   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.309010   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:25.048061   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:25.068586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:25.068658   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:25.121199   72639 cri.go:89] found id: ""
	I1014 15:03:25.121228   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.121237   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:25.121243   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:25.121303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:25.174705   72639 cri.go:89] found id: ""
	I1014 15:03:25.174738   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.174749   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:25.174757   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:25.174815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:25.236972   72639 cri.go:89] found id: ""
	I1014 15:03:25.237002   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.237013   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:25.237020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:25.237077   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:25.276443   72639 cri.go:89] found id: ""
	I1014 15:03:25.276473   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.276483   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:25.276489   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:25.276541   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:25.314573   72639 cri.go:89] found id: ""
	I1014 15:03:25.314623   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.314636   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:25.314645   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:25.314708   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:25.357489   72639 cri.go:89] found id: ""
	I1014 15:03:25.357515   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.357525   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:25.357533   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:25.357595   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:25.397504   72639 cri.go:89] found id: ""
	I1014 15:03:25.397527   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.397538   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:25.397546   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:25.397597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:25.433139   72639 cri.go:89] found id: ""
	I1014 15:03:25.433162   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.433170   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:25.433179   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:25.433193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:25.448088   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:25.448121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:25.522377   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:25.522401   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:25.522415   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:25.595505   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:25.595538   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:25.643478   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:25.643511   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:25.137557   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.637096   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.067425   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.565568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:26.809419   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.309193   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.310234   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:28.195236   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:28.208612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:28.208686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:28.248538   72639 cri.go:89] found id: ""
	I1014 15:03:28.248569   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.248581   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:28.248588   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:28.248652   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:28.286103   72639 cri.go:89] found id: ""
	I1014 15:03:28.286131   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.286143   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:28.286149   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:28.286209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:28.321335   72639 cri.go:89] found id: ""
	I1014 15:03:28.321371   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.321383   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:28.321391   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:28.321453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:28.358538   72639 cri.go:89] found id: ""
	I1014 15:03:28.358571   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.358581   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:28.358588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:28.358661   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:28.397058   72639 cri.go:89] found id: ""
	I1014 15:03:28.397087   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.397099   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:28.397106   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:28.397175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:28.434010   72639 cri.go:89] found id: ""
	I1014 15:03:28.434032   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.434040   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:28.434045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:28.434095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:28.474646   72639 cri.go:89] found id: ""
	I1014 15:03:28.474672   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.474681   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:28.474687   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:28.474736   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:28.512833   72639 cri.go:89] found id: ""
	I1014 15:03:28.512860   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.512871   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:28.512882   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:28.512894   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:28.526233   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:28.526262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:28.601366   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:28.601393   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:28.601416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:28.690261   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:28.690300   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:28.734134   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:28.734158   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.290184   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:31.303493   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:31.303558   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:31.341521   72639 cri.go:89] found id: ""
	I1014 15:03:31.341552   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.341563   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:31.341569   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:31.341627   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:31.378811   72639 cri.go:89] found id: ""
	I1014 15:03:31.378839   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.378851   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:31.378859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:31.378922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:31.416282   72639 cri.go:89] found id: ""
	I1014 15:03:31.416310   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.416321   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:31.416328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:31.416392   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:31.456089   72639 cri.go:89] found id: ""
	I1014 15:03:31.456123   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.456134   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:31.456142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:31.456202   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:31.496429   72639 cri.go:89] found id: ""
	I1014 15:03:31.496468   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.496478   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:31.496485   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:31.496548   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:31.535226   72639 cri.go:89] found id: ""
	I1014 15:03:31.535248   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.535256   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:31.535262   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:31.535321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:31.572580   72639 cri.go:89] found id: ""
	I1014 15:03:31.572608   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.572623   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:31.572631   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:31.572691   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:31.606736   72639 cri.go:89] found id: ""
	I1014 15:03:31.606759   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.606766   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:31.606774   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:31.606785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:31.646048   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:31.646078   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.696818   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:31.696851   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:31.710099   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:31.710128   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:31.787756   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:31.787783   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:31.787798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:30.136436   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:32.138037   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.139660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.566034   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.567029   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.809434   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.309487   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.369392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:34.383263   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:34.383344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:34.417763   72639 cri.go:89] found id: ""
	I1014 15:03:34.417797   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.417809   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:34.417816   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:34.417890   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:34.453361   72639 cri.go:89] found id: ""
	I1014 15:03:34.453391   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.453402   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:34.453409   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:34.453488   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:34.490878   72639 cri.go:89] found id: ""
	I1014 15:03:34.490905   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.490913   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:34.490919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:34.490980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:34.527554   72639 cri.go:89] found id: ""
	I1014 15:03:34.527584   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.527595   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:34.527603   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:34.527655   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:34.564813   72639 cri.go:89] found id: ""
	I1014 15:03:34.564841   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.564851   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:34.564857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:34.564903   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:34.599899   72639 cri.go:89] found id: ""
	I1014 15:03:34.599930   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.599942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:34.599949   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:34.600019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:34.641686   72639 cri.go:89] found id: ""
	I1014 15:03:34.641717   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.641728   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:34.641735   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:34.641794   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:34.681154   72639 cri.go:89] found id: ""
	I1014 15:03:34.681184   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.681195   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:34.681205   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:34.681218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:34.719638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:34.719672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:34.771687   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:34.771722   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:34.785943   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:34.785972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:34.861821   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:34.861861   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:34.861875   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.441605   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:37.456763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:37.456828   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:37.494176   72639 cri.go:89] found id: ""
	I1014 15:03:37.494202   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.494210   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:37.494216   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:37.494268   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:37.538802   72639 cri.go:89] found id: ""
	I1014 15:03:37.538834   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.538846   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:37.538853   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:37.538913   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:37.586282   72639 cri.go:89] found id: ""
	I1014 15:03:37.586312   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.586322   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:37.586328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:37.586397   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:37.632673   72639 cri.go:89] found id: ""
	I1014 15:03:37.632698   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.632709   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:37.632715   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:37.632771   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:37.673340   72639 cri.go:89] found id: ""
	I1014 15:03:37.673364   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.673372   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:37.673377   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:37.673427   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:37.718725   72639 cri.go:89] found id: ""
	I1014 15:03:37.718750   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.718758   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:37.718764   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:37.718807   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:37.760560   72639 cri.go:89] found id: ""
	I1014 15:03:37.760587   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.760597   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:37.760605   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:37.760665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:37.800912   72639 cri.go:89] found id: ""
	I1014 15:03:37.800941   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.800949   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:37.800957   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:37.800968   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:37.815338   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:37.815363   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:37.893018   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:37.893050   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:37.893067   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.978315   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:37.978349   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:36.637635   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:39.136295   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.065915   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.066310   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.810020   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.810460   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.019760   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:38.019788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.570918   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:40.586058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:40.586122   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:40.623753   72639 cri.go:89] found id: ""
	I1014 15:03:40.623784   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.623795   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:40.623801   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:40.623862   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:40.663909   72639 cri.go:89] found id: ""
	I1014 15:03:40.663937   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.663946   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:40.663953   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:40.664008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:40.698572   72639 cri.go:89] found id: ""
	I1014 15:03:40.698615   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.698626   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:40.698633   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:40.698683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:40.734882   72639 cri.go:89] found id: ""
	I1014 15:03:40.734907   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.734914   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:40.734920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:40.734976   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:40.768429   72639 cri.go:89] found id: ""
	I1014 15:03:40.768455   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.768462   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:40.768468   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:40.768527   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:40.803429   72639 cri.go:89] found id: ""
	I1014 15:03:40.803456   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.803466   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:40.803474   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:40.803535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:40.842854   72639 cri.go:89] found id: ""
	I1014 15:03:40.842883   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.842905   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:40.842913   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:40.842988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:40.879638   72639 cri.go:89] found id: ""
	I1014 15:03:40.879661   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.879669   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:40.879677   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:40.879687   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:40.924949   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:40.924983   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.976271   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:40.976304   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:40.991492   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:40.991520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:41.071418   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:41.071439   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:41.071453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
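The loop above is minikube's control-plane probe for the profile running the v1.20.0 binaries: no kube-apiserver process is found, every expected container comes back empty from CRI-O, and the test then collects kubelet, dmesg, node and CRI-O logs before retrying. A minimal sketch of the same checks run by hand on the node, using the commands quoted in the log:

	sudo crictl ps -a --quiet --name=kube-apiserver     # no output: CRI-O has no kube-apiserver container at all
	sudo journalctl -u kubelet -n 400                   # kubelet's view of why the static pods never started
	sudo journalctl -u crio -n 400                      # CRI-O side of the same failure
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig   # keeps failing with "connection refused" until the apiserver is up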
	I1014 15:03:41.136877   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.637356   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.566353   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.065982   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.066405   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.310188   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.811549   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
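The interleaved pod_ready lines come from the parallel clusters (PIDs 72173, 71679 and 72390), each polling a metrics-server pod that never reports Ready. An equivalent manual check of the same condition, sketched with a placeholder context name and a pod name taken from the log:

	kubectl --context <profile> -n kube-system get pod metrics-server-6867b74b74-zc8zh \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # pod_ready.go waits for this to print "True"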
	I1014 15:03:43.652387   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:43.666239   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:43.666317   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:43.705726   72639 cri.go:89] found id: ""
	I1014 15:03:43.705752   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.705761   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:43.705766   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:43.705814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:43.745648   72639 cri.go:89] found id: ""
	I1014 15:03:43.745672   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.745680   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:43.745685   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:43.745731   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:43.783032   72639 cri.go:89] found id: ""
	I1014 15:03:43.783055   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.783063   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:43.783068   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:43.783115   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:43.820582   72639 cri.go:89] found id: ""
	I1014 15:03:43.820607   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.820617   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:43.820623   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:43.820669   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:43.862312   72639 cri.go:89] found id: ""
	I1014 15:03:43.862338   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.862348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:43.862353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:43.862404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:43.898338   72639 cri.go:89] found id: ""
	I1014 15:03:43.898368   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.898379   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:43.898388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:43.898448   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:43.934682   72639 cri.go:89] found id: ""
	I1014 15:03:43.934709   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.934719   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:43.934726   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:43.934781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:43.970209   72639 cri.go:89] found id: ""
	I1014 15:03:43.970237   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.970247   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:43.970257   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:43.970269   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:44.024791   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:44.024832   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:44.038431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:44.038457   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:44.117255   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:44.117291   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:44.117308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:44.199397   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:44.199436   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:46.739819   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:46.755553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:46.755625   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:46.797225   72639 cri.go:89] found id: ""
	I1014 15:03:46.797253   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.797265   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:46.797272   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:46.797335   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:46.832999   72639 cri.go:89] found id: ""
	I1014 15:03:46.833025   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.833036   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:46.833043   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:46.833103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:46.872711   72639 cri.go:89] found id: ""
	I1014 15:03:46.872733   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.872741   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:46.872746   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:46.872795   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:46.909945   72639 cri.go:89] found id: ""
	I1014 15:03:46.909968   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.909977   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:46.909985   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:46.910046   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:46.946036   72639 cri.go:89] found id: ""
	I1014 15:03:46.946067   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.946080   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:46.946087   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:46.946141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:46.981772   72639 cri.go:89] found id: ""
	I1014 15:03:46.981806   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.981819   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:46.981828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:46.981896   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:47.022761   72639 cri.go:89] found id: ""
	I1014 15:03:47.022790   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.022800   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:47.022807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:47.022869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:47.057368   72639 cri.go:89] found id: ""
	I1014 15:03:47.057392   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.057400   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:47.057408   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:47.057418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:47.134369   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:47.134408   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:47.179550   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:47.179586   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:47.233317   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:47.233355   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:47.247598   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:47.247629   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:47.321309   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:45.637760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.136826   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:47.067003   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.565410   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:50.812241   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.821955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:49.836907   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:49.836975   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:49.876651   72639 cri.go:89] found id: ""
	I1014 15:03:49.876682   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.876694   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:49.876713   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:49.876781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:49.913440   72639 cri.go:89] found id: ""
	I1014 15:03:49.913464   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.913473   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:49.913479   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:49.913535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:49.949352   72639 cri.go:89] found id: ""
	I1014 15:03:49.949383   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.949395   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:49.949402   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:49.949463   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:49.984599   72639 cri.go:89] found id: ""
	I1014 15:03:49.984629   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.984641   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:49.984649   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:49.984709   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:50.028049   72639 cri.go:89] found id: ""
	I1014 15:03:50.028072   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.028083   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:50.028090   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:50.028166   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:50.062272   72639 cri.go:89] found id: ""
	I1014 15:03:50.062294   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.062302   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:50.062308   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:50.062358   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:50.099722   72639 cri.go:89] found id: ""
	I1014 15:03:50.099750   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.099762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:50.099769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:50.099830   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:50.139984   72639 cri.go:89] found id: ""
	I1014 15:03:50.140005   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.140013   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:50.140020   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:50.140032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:50.218467   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:50.218500   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:50.260600   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:50.260635   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:50.313725   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:50.313757   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:50.328431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:50.328462   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:50.401334   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
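Every retry ends the same way: localhost:8443, the apiserver's secure port, refuses connections because nothing is listening there. A quick way to confirm that directly on the node, assuming the iproute2 ss tool is present in the guest image:

	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"   # no match confirms the apiserver port is closed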
	I1014 15:03:52.901787   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:52.917836   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:52.917902   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:52.955387   72639 cri.go:89] found id: ""
	I1014 15:03:52.955418   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.955431   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:52.955440   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:52.955504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:52.990890   72639 cri.go:89] found id: ""
	I1014 15:03:52.990924   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.990936   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:52.990945   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:52.991004   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:50.636581   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.137639   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:51.566403   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:54.066690   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.310174   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:55.809402   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.032344   72639 cri.go:89] found id: ""
	I1014 15:03:53.032374   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.032384   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:53.032390   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:53.032458   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:53.073501   72639 cri.go:89] found id: ""
	I1014 15:03:53.073527   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.073537   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:53.073544   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:53.073602   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:53.114273   72639 cri.go:89] found id: ""
	I1014 15:03:53.114307   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.114316   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:53.114334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:53.114389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:53.155448   72639 cri.go:89] found id: ""
	I1014 15:03:53.155475   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.155484   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:53.155490   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:53.155539   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:53.191304   72639 cri.go:89] found id: ""
	I1014 15:03:53.191338   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.191350   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:53.191357   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:53.191438   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:53.224664   72639 cri.go:89] found id: ""
	I1014 15:03:53.224691   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.224702   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:53.224727   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:53.224744   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:53.275751   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:53.275786   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:53.289275   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:53.289303   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:53.369828   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:53.369855   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:53.369871   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:53.457248   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:53.457285   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:56.003384   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:56.017722   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:56.017782   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:56.056644   72639 cri.go:89] found id: ""
	I1014 15:03:56.056675   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.056686   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:56.056694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:56.056757   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:56.094482   72639 cri.go:89] found id: ""
	I1014 15:03:56.094507   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.094517   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:56.094524   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:56.094583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:56.129884   72639 cri.go:89] found id: ""
	I1014 15:03:56.129913   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.129921   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:56.129926   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:56.129974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:56.167171   72639 cri.go:89] found id: ""
	I1014 15:03:56.167198   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.167206   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:56.167211   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:56.167264   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:56.204400   72639 cri.go:89] found id: ""
	I1014 15:03:56.204433   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.204442   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:56.204447   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:56.204494   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:56.240407   72639 cri.go:89] found id: ""
	I1014 15:03:56.240437   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.240448   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:56.240456   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:56.240517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:56.277653   72639 cri.go:89] found id: ""
	I1014 15:03:56.277679   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.277687   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:56.277693   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:56.277738   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:56.313423   72639 cri.go:89] found id: ""
	I1014 15:03:56.313451   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.313459   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:56.313468   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:56.313480   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:56.368094   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:56.368133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:56.382563   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:56.382621   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:56.455106   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:56.455130   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:56.455144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:56.532288   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:56.532329   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:55.636007   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:57.637196   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:56.566763   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.066227   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:58.309184   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:00.309370   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.072469   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:59.089024   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:59.089094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:59.130798   72639 cri.go:89] found id: ""
	I1014 15:03:59.130829   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.130840   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:59.130848   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:59.130908   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:59.167828   72639 cri.go:89] found id: ""
	I1014 15:03:59.167854   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.167864   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:59.167871   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:59.167932   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:59.223482   72639 cri.go:89] found id: ""
	I1014 15:03:59.223509   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.223520   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:59.223528   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:59.223590   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:59.261186   72639 cri.go:89] found id: ""
	I1014 15:03:59.261231   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.261243   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:59.261251   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:59.261314   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:59.296924   72639 cri.go:89] found id: ""
	I1014 15:03:59.296985   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.297000   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:59.297008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:59.297084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:59.333891   72639 cri.go:89] found id: ""
	I1014 15:03:59.333915   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.333923   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:59.333929   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:59.333991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:59.374106   72639 cri.go:89] found id: ""
	I1014 15:03:59.374134   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.374143   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:59.374150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:59.374222   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:59.412256   72639 cri.go:89] found id: ""
	I1014 15:03:59.412283   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.412291   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:59.412298   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:59.412308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:59.492869   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:59.492904   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:59.492923   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:59.576441   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:59.576473   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.618638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:59.618668   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:59.671295   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:59.671331   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.184689   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:02.197763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:02.197833   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:02.231709   72639 cri.go:89] found id: ""
	I1014 15:04:02.231734   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.231746   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:02.231753   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:02.231815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:02.269259   72639 cri.go:89] found id: ""
	I1014 15:04:02.269291   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.269303   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:02.269311   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:02.269390   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:02.305926   72639 cri.go:89] found id: ""
	I1014 15:04:02.305956   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.305967   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:02.305975   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:02.306034   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:02.349516   72639 cri.go:89] found id: ""
	I1014 15:04:02.349544   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.349557   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:02.349563   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:02.349622   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:02.388334   72639 cri.go:89] found id: ""
	I1014 15:04:02.388361   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.388371   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:02.388376   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:02.388428   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:02.422742   72639 cri.go:89] found id: ""
	I1014 15:04:02.422770   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.422781   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:02.422789   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:02.422850   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:02.463686   72639 cri.go:89] found id: ""
	I1014 15:04:02.463710   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.463718   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:02.463724   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:02.463770   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:02.498352   72639 cri.go:89] found id: ""
	I1014 15:04:02.498383   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.498394   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:02.498404   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:02.498418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.512531   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:02.512561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:02.585331   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:02.585359   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:02.585373   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:02.667376   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:02.667414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:02.708101   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:02.708133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:00.136170   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.138198   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:01.566456   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.066934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.309906   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.310009   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.310084   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:05.259839   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:05.273102   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:05.273186   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:05.311745   72639 cri.go:89] found id: ""
	I1014 15:04:05.311768   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.311776   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:05.311787   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:05.311834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:05.349313   72639 cri.go:89] found id: ""
	I1014 15:04:05.349336   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.349344   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:05.349352   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:05.349416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:05.388003   72639 cri.go:89] found id: ""
	I1014 15:04:05.388026   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.388034   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:05.388039   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:05.388098   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:05.426636   72639 cri.go:89] found id: ""
	I1014 15:04:05.426665   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.426676   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:05.426683   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:05.426745   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:05.461945   72639 cri.go:89] found id: ""
	I1014 15:04:05.461974   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.461983   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:05.461989   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:05.462049   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:05.497099   72639 cri.go:89] found id: ""
	I1014 15:04:05.497130   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.497142   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:05.497149   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:05.497216   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:05.531621   72639 cri.go:89] found id: ""
	I1014 15:04:05.531652   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.531664   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:05.531671   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:05.531729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:05.568950   72639 cri.go:89] found id: ""
	I1014 15:04:05.568973   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.568983   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:05.568992   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:05.569012   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.624806   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:05.624846   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:05.651912   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:05.651961   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:05.740342   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:05.740369   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:05.740384   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:05.817901   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:05.817932   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:04.636643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:07.137525   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.566519   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.567458   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.809718   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.809968   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.360267   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:08.373249   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:08.373325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:08.409485   72639 cri.go:89] found id: ""
	I1014 15:04:08.409520   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.409535   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:08.409542   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:08.409604   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:08.444977   72639 cri.go:89] found id: ""
	I1014 15:04:08.445000   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.445008   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:08.445014   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:08.445061   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:08.478080   72639 cri.go:89] found id: ""
	I1014 15:04:08.478108   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.478117   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:08.478123   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:08.478169   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:08.511510   72639 cri.go:89] found id: ""
	I1014 15:04:08.511536   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.511545   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:08.511552   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:08.511603   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:08.546260   72639 cri.go:89] found id: ""
	I1014 15:04:08.546285   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.546292   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:08.546299   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:08.546347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:08.582775   72639 cri.go:89] found id: ""
	I1014 15:04:08.582799   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.582810   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:08.582816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:08.582875   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:08.619208   72639 cri.go:89] found id: ""
	I1014 15:04:08.619231   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.619239   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:08.619244   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:08.619299   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:08.654823   72639 cri.go:89] found id: ""
	I1014 15:04:08.654849   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.654860   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:08.654870   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:08.654885   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:08.704543   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:08.704574   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:08.718111   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:08.718144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:08.792267   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:08.792290   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:08.792309   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:08.870178   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:08.870210   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:11.409975   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:11.432171   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:11.432243   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:11.468997   72639 cri.go:89] found id: ""
	I1014 15:04:11.469021   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.469030   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:11.469035   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:11.469094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:11.504312   72639 cri.go:89] found id: ""
	I1014 15:04:11.504337   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.504346   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:11.504354   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:11.504417   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:11.540628   72639 cri.go:89] found id: ""
	I1014 15:04:11.540654   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.540662   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:11.540667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:11.540729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:11.576466   72639 cri.go:89] found id: ""
	I1014 15:04:11.576491   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.576498   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:11.576506   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:11.576550   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:11.611466   72639 cri.go:89] found id: ""
	I1014 15:04:11.611501   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.611512   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:11.611519   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:11.611578   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:11.650089   72639 cri.go:89] found id: ""
	I1014 15:04:11.650116   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.650126   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:11.650133   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:11.650191   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:11.686538   72639 cri.go:89] found id: ""
	I1014 15:04:11.686563   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.686571   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:11.686577   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:11.686654   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:11.725494   72639 cri.go:89] found id: ""
	I1014 15:04:11.725517   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.725524   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:11.725532   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:11.725545   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:11.779062   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:11.779102   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:11.792726   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:11.792753   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:11.867945   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:11.867972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:11.867986   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:11.952299   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:11.952340   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:09.636140   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:11.636455   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.136183   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.567626   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.065875   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.066484   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.310523   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.811094   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.493922   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:14.506754   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:14.506817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:14.540456   72639 cri.go:89] found id: ""
	I1014 15:04:14.540480   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.540489   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:14.540495   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:14.540545   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:14.574819   72639 cri.go:89] found id: ""
	I1014 15:04:14.574843   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.574853   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:14.574859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:14.574917   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:14.608834   72639 cri.go:89] found id: ""
	I1014 15:04:14.608859   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.608868   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:14.608873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:14.608920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:14.644182   72639 cri.go:89] found id: ""
	I1014 15:04:14.644210   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.644218   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:14.644223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:14.644283   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:14.679113   72639 cri.go:89] found id: ""
	I1014 15:04:14.679145   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.679156   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:14.679164   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:14.679228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:14.716111   72639 cri.go:89] found id: ""
	I1014 15:04:14.716142   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.716154   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:14.716167   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:14.716220   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:14.755884   72639 cri.go:89] found id: ""
	I1014 15:04:14.755907   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.755915   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:14.755920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:14.755968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:14.794167   72639 cri.go:89] found id: ""
	I1014 15:04:14.794195   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.794207   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:14.794217   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:14.794234   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:14.844828   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:14.844864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:14.859424   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:14.859451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:14.936660   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:14.936687   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:14.936703   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:15.017034   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:15.017070   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
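	(The cycle above — pgrep for kube-apiserver, a crictl listing for each control-plane container, then kubelet/dmesg/describe-nodes/CRI-O/container-status collection — keeps repeating because no control-plane container has come up. A minimal bash sketch of the same probe, using only commands that appear in the log and assuming it is run on the minikube node, e.g. via "minikube ssh":)

	# Hedged sketch of the per-cycle health probe the log above automates.
	# Assumed to run on the minikube node itself.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  echo "$name: ${ids:-<none>}"
	done
	# Fallback log collection, matching the "Gathering logs" steps:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400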
	I1014 15:04:17.555604   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:17.570628   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:17.570687   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:17.612919   72639 cri.go:89] found id: ""
	I1014 15:04:17.612943   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.612951   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:17.612956   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:17.613002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:17.651178   72639 cri.go:89] found id: ""
	I1014 15:04:17.651210   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.651220   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:17.651226   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:17.651278   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:17.687923   72639 cri.go:89] found id: ""
	I1014 15:04:17.687955   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.687966   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:17.687973   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:17.688024   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:17.724759   72639 cri.go:89] found id: ""
	I1014 15:04:17.724790   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.724800   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:17.724807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:17.724866   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:17.760189   72639 cri.go:89] found id: ""
	I1014 15:04:17.760212   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.760220   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:17.760226   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:17.760274   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:17.797517   72639 cri.go:89] found id: ""
	I1014 15:04:17.797541   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.797549   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:17.797554   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:17.797601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:17.833238   72639 cri.go:89] found id: ""
	I1014 15:04:17.833261   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.833270   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:17.833275   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:17.833321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:17.868828   72639 cri.go:89] found id: ""
	I1014 15:04:17.868857   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.868865   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:17.868873   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:17.868883   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:17.956972   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:17.957011   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:16.137357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.636865   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:17.067415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:19.566146   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.310380   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:20.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.006354   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:18.006390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:18.056237   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:18.056271   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:18.070763   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:18.070792   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:18.147471   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:20.648238   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:20.661465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:20.661534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:20.695869   72639 cri.go:89] found id: ""
	I1014 15:04:20.695894   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.695902   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:20.695907   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:20.695957   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:20.729271   72639 cri.go:89] found id: ""
	I1014 15:04:20.729295   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.729313   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:20.729319   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:20.729364   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:20.767110   72639 cri.go:89] found id: ""
	I1014 15:04:20.767137   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.767147   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:20.767154   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:20.767209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:20.802752   72639 cri.go:89] found id: ""
	I1014 15:04:20.802781   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.802791   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:20.802798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:20.802846   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:20.841958   72639 cri.go:89] found id: ""
	I1014 15:04:20.841987   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.841998   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:20.842005   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:20.842066   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:20.878869   72639 cri.go:89] found id: ""
	I1014 15:04:20.878896   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.878907   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:20.878914   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:20.878974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:20.913802   72639 cri.go:89] found id: ""
	I1014 15:04:20.913838   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.913852   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:20.913861   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:20.913922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:20.948350   72639 cri.go:89] found id: ""
	I1014 15:04:20.948378   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.948395   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:20.948403   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:20.948416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:21.001065   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:21.001098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:21.014427   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:21.014458   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:21.091386   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:21.091412   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:21.091432   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:21.175255   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:21.175299   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:21.137358   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.636623   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.066415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:24.066476   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.809925   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:25.309528   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.718260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:23.732366   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:23.732445   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:23.767269   72639 cri.go:89] found id: ""
	I1014 15:04:23.767299   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.767311   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:23.767317   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:23.767379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:23.808502   72639 cri.go:89] found id: ""
	I1014 15:04:23.808532   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.808543   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:23.808550   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:23.808606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:23.845632   72639 cri.go:89] found id: ""
	I1014 15:04:23.845664   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.845677   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:23.845685   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:23.845753   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:23.880218   72639 cri.go:89] found id: ""
	I1014 15:04:23.880249   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.880261   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:23.880268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:23.880332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:23.915674   72639 cri.go:89] found id: ""
	I1014 15:04:23.915697   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.915705   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:23.915710   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:23.915767   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:23.950526   72639 cri.go:89] found id: ""
	I1014 15:04:23.950559   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.950570   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:23.950578   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:23.950656   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:23.986130   72639 cri.go:89] found id: ""
	I1014 15:04:23.986167   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.986178   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:23.986186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:23.986246   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:24.027112   72639 cri.go:89] found id: ""
	I1014 15:04:24.027141   72639 logs.go:282] 0 containers: []
	W1014 15:04:24.027154   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:24.027165   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:24.027181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:24.082559   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:24.082610   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:24.096900   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:24.096929   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:24.173293   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:24.173327   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:24.173341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:24.256921   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:24.256962   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:26.802073   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:26.817307   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:26.817366   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:26.855777   72639 cri.go:89] found id: ""
	I1014 15:04:26.855805   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.855817   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:26.855825   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:26.855876   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:26.892260   72639 cri.go:89] found id: ""
	I1014 15:04:26.892288   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.892300   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:26.892308   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:26.892369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:26.931066   72639 cri.go:89] found id: ""
	I1014 15:04:26.931103   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.931114   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:26.931122   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:26.931174   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:26.966890   72639 cri.go:89] found id: ""
	I1014 15:04:26.966923   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.966933   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:26.966941   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:26.967002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:27.001338   72639 cri.go:89] found id: ""
	I1014 15:04:27.001368   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.001379   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:27.001386   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:27.001454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:27.041798   72639 cri.go:89] found id: ""
	I1014 15:04:27.041830   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.041839   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:27.041844   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:27.041905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:27.080248   72639 cri.go:89] found id: ""
	I1014 15:04:27.080279   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.080288   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:27.080293   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:27.080341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:27.116207   72639 cri.go:89] found id: ""
	I1014 15:04:27.116234   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.116242   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:27.116250   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:27.116264   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:27.191149   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:27.191174   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:27.191203   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:27.275771   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:27.275808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:27.323223   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:27.323254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:27.375409   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:27.375455   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:26.137156   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.637895   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:26.066790   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.565208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:27.810315   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.309211   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:29.890408   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:29.904797   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:29.904853   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:29.938655   72639 cri.go:89] found id: ""
	I1014 15:04:29.938685   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.938698   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:29.938705   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:29.938765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:29.976477   72639 cri.go:89] found id: ""
	I1014 15:04:29.976508   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.976519   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:29.976526   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:29.976583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:30.014813   72639 cri.go:89] found id: ""
	I1014 15:04:30.014842   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.014853   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:30.014860   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:30.014926   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:30.050804   72639 cri.go:89] found id: ""
	I1014 15:04:30.050833   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.050844   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:30.050854   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:30.050918   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:30.087921   72639 cri.go:89] found id: ""
	I1014 15:04:30.087946   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.087954   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:30.087959   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:30.088016   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:30.125411   72639 cri.go:89] found id: ""
	I1014 15:04:30.125446   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.125458   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:30.125465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:30.125519   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:30.162067   72639 cri.go:89] found id: ""
	I1014 15:04:30.162099   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.162110   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:30.162118   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:30.162181   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:30.200376   72639 cri.go:89] found id: ""
	I1014 15:04:30.200406   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.200418   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:30.200435   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:30.200451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:30.279965   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:30.279992   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:30.280007   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:30.364866   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:30.364900   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:30.408808   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:30.408842   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:30.464473   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:30.464507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:32.980254   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:32.994254   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:32.994320   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:31.136531   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.137201   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.566228   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.567393   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.065955   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.810349   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.308794   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.035996   72639 cri.go:89] found id: ""
	I1014 15:04:33.036025   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.036036   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:33.036043   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:33.036103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:33.077494   72639 cri.go:89] found id: ""
	I1014 15:04:33.077522   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.077531   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:33.077538   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:33.077585   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:33.112666   72639 cri.go:89] found id: ""
	I1014 15:04:33.112695   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.112705   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:33.112711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:33.112772   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:33.150229   72639 cri.go:89] found id: ""
	I1014 15:04:33.150266   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.150276   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:33.150282   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:33.150336   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:33.186960   72639 cri.go:89] found id: ""
	I1014 15:04:33.186989   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.187001   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:33.187008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:33.187062   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:33.223596   72639 cri.go:89] found id: ""
	I1014 15:04:33.223631   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.223641   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:33.223647   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:33.223711   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:33.260137   72639 cri.go:89] found id: ""
	I1014 15:04:33.260162   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.260170   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:33.260175   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:33.260228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:33.298072   72639 cri.go:89] found id: ""
	I1014 15:04:33.298095   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.298103   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:33.298110   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:33.298121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:33.379587   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:33.379623   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:33.423427   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:33.423456   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:33.474644   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:33.474683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:33.488324   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:33.488354   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:33.556257   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
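	(Every "describe nodes" attempt fails with "connection refused" on localhost:8443 because the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at that port and nothing is listening there yet. A hedged one-line check of the same condition, assuming the API server would serve its standard /healthz endpoint on that port once up:)

	# Returns "ok" once kube-apiserver is serving; fails with "connection refused" while it is down.
	curl -sk https://localhost:8443/healthz || echo "apiserver not reachable on :8443"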
	I1014 15:04:36.056955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:36.072461   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:36.072536   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:36.109467   72639 cri.go:89] found id: ""
	I1014 15:04:36.109493   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.109502   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:36.109509   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:36.109561   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:36.147985   72639 cri.go:89] found id: ""
	I1014 15:04:36.148012   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.148020   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:36.148025   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:36.148071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:36.183885   72639 cri.go:89] found id: ""
	I1014 15:04:36.183906   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.183914   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:36.183919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:36.183968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:36.220994   72639 cri.go:89] found id: ""
	I1014 15:04:36.221025   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.221036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:36.221044   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:36.221108   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:36.256586   72639 cri.go:89] found id: ""
	I1014 15:04:36.256612   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.256621   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:36.256627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:36.256683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:36.293229   72639 cri.go:89] found id: ""
	I1014 15:04:36.293256   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.293265   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:36.293272   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:36.293339   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:36.329254   72639 cri.go:89] found id: ""
	I1014 15:04:36.329279   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.329290   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:36.329297   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:36.329357   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:36.366495   72639 cri.go:89] found id: ""
	I1014 15:04:36.366526   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.366538   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:36.366548   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:36.366561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:36.420985   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:36.421018   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:36.435532   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:36.435565   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:36.510459   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.510484   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:36.510499   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:36.593057   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:36.593094   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:35.637182   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.637348   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.066334   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.566950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.309629   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.809500   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
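	(The interleaved pod_ready lines come from the other clusters under test — processes 72173, 71679 and 72390 — each polling whether its metrics-server pod has reached the Ready condition. A hedged kubectl equivalent of that poll; the pod name is copied from the log, while the --context value is a placeholder, not taken from the log:)

	# Prints "True" once the pod's Ready condition is met; <cluster-context> is hypothetical.
	kubectl --context <cluster-context> -n kube-system get pod metrics-server-6867b74b74-bcrqs \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'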
	I1014 15:04:39.138570   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:39.152280   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:39.152342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:39.186647   72639 cri.go:89] found id: ""
	I1014 15:04:39.186676   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.186687   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:39.186694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:39.186754   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:39.223560   72639 cri.go:89] found id: ""
	I1014 15:04:39.223586   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.223594   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:39.223599   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:39.223644   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:39.257835   72639 cri.go:89] found id: ""
	I1014 15:04:39.257867   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.257879   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:39.257886   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:39.257947   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:39.294656   72639 cri.go:89] found id: ""
	I1014 15:04:39.294684   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.294692   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:39.294699   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:39.294750   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:39.333474   72639 cri.go:89] found id: ""
	I1014 15:04:39.333503   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.333513   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:39.333520   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:39.333586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:39.374385   72639 cri.go:89] found id: ""
	I1014 15:04:39.374414   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.374424   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:39.374435   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:39.374483   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:39.412856   72639 cri.go:89] found id: ""
	I1014 15:04:39.412888   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.412899   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:39.412906   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:39.412966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:39.463087   72639 cri.go:89] found id: ""
	I1014 15:04:39.463115   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.463127   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:39.463138   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:39.463154   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:39.514309   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:39.514342   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:39.528947   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:39.528972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:39.603984   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:39.604004   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:39.604016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.685053   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:39.685093   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.234178   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:42.247421   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:42.247497   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:42.288496   72639 cri.go:89] found id: ""
	I1014 15:04:42.288521   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.288529   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:42.288535   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:42.288588   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:42.324346   72639 cri.go:89] found id: ""
	I1014 15:04:42.324382   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.324394   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:42.324401   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:42.324469   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:42.362879   72639 cri.go:89] found id: ""
	I1014 15:04:42.362910   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.362922   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:42.362928   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:42.362991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:42.399347   72639 cri.go:89] found id: ""
	I1014 15:04:42.399375   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.399383   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:42.399389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:42.399473   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:42.434942   72639 cri.go:89] found id: ""
	I1014 15:04:42.434971   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.434990   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:42.434999   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:42.435063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:42.470886   72639 cri.go:89] found id: ""
	I1014 15:04:42.470916   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.470928   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:42.470934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:42.470994   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:42.510713   72639 cri.go:89] found id: ""
	I1014 15:04:42.510742   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.510752   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:42.510758   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:42.510820   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:42.544506   72639 cri.go:89] found id: ""
	I1014 15:04:42.544538   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.544547   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:42.544559   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:42.544570   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.588658   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:42.588694   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:42.642165   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:42.642198   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:42.658073   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:42.658110   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:42.730486   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:42.730510   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:42.730524   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.637476   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.637715   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.137654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:42.065534   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.066309   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.809932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.309377   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.309699   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:45.307806   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:45.321664   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:45.321733   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:45.359670   72639 cri.go:89] found id: ""
	I1014 15:04:45.359697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.359708   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:45.359715   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:45.359781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:45.398673   72639 cri.go:89] found id: ""
	I1014 15:04:45.398703   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.398715   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:45.398722   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:45.398784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:45.441656   72639 cri.go:89] found id: ""
	I1014 15:04:45.441685   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.441697   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:45.441705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:45.441768   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:45.476159   72639 cri.go:89] found id: ""
	I1014 15:04:45.476188   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.476195   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:45.476201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:45.476263   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:45.513776   72639 cri.go:89] found id: ""
	I1014 15:04:45.513807   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.513819   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:45.513828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:45.513894   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:45.550336   72639 cri.go:89] found id: ""
	I1014 15:04:45.550371   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.550382   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:45.550388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:45.550450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:45.586668   72639 cri.go:89] found id: ""
	I1014 15:04:45.586697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.586705   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:45.586711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:45.586760   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:45.622530   72639 cri.go:89] found id: ""
	I1014 15:04:45.622559   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.622568   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:45.622576   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:45.622589   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:45.674471   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:45.674504   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:45.690430   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:45.690463   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:45.772133   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:45.772165   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:45.772181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.859835   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:45.859880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:46.636239   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.637696   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.565440   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.569076   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.309788   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.310209   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.434011   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:48.448747   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:48.448826   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:48.493642   72639 cri.go:89] found id: ""
	I1014 15:04:48.493668   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.493680   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:48.493687   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:48.493747   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:48.530298   72639 cri.go:89] found id: ""
	I1014 15:04:48.530327   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.530336   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:48.530344   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:48.530403   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:48.566215   72639 cri.go:89] found id: ""
	I1014 15:04:48.566242   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.566252   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:48.566261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:48.566325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:48.604528   72639 cri.go:89] found id: ""
	I1014 15:04:48.604553   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.604561   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:48.604566   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:48.604616   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:48.646152   72639 cri.go:89] found id: ""
	I1014 15:04:48.646180   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.646191   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:48.646198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:48.646257   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:48.682670   72639 cri.go:89] found id: ""
	I1014 15:04:48.682696   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.682704   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:48.682711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:48.682762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:48.722292   72639 cri.go:89] found id: ""
	I1014 15:04:48.722318   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.722326   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:48.722335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:48.722400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:48.762474   72639 cri.go:89] found id: ""
	I1014 15:04:48.762506   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.762518   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:48.762528   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:48.762553   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:48.776628   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:48.776652   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:48.849904   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:48.849928   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:48.849941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:48.927033   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:48.927068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.970775   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:48.970807   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:51.521113   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:51.535318   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:51.535389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:51.582631   72639 cri.go:89] found id: ""
	I1014 15:04:51.582658   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.582666   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:51.582671   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:51.582721   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:51.655323   72639 cri.go:89] found id: ""
	I1014 15:04:51.655362   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.655371   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:51.655376   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:51.655433   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:51.722837   72639 cri.go:89] found id: ""
	I1014 15:04:51.722863   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.722875   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:51.722882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:51.722939   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:51.759917   72639 cri.go:89] found id: ""
	I1014 15:04:51.759946   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.759957   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:51.759963   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:51.760023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:51.798656   72639 cri.go:89] found id: ""
	I1014 15:04:51.798689   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.798702   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:51.798711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:51.798777   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:51.839285   72639 cri.go:89] found id: ""
	I1014 15:04:51.839312   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.839324   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:51.839334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:51.839391   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:51.876997   72639 cri.go:89] found id: ""
	I1014 15:04:51.877028   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.877038   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:51.877045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:51.877091   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:51.913991   72639 cri.go:89] found id: ""
	I1014 15:04:51.914020   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.914028   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:51.914036   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:51.914046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:51.993392   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:51.993427   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:52.039722   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:52.039756   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:52.090901   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:52.090937   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:52.105014   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:52.105052   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:52.175505   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:51.137343   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.636660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.575054   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.067208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:52.809933   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.810498   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.676549   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:54.690113   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:54.690204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:54.726478   72639 cri.go:89] found id: ""
	I1014 15:04:54.726511   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.726523   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:54.726538   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:54.726611   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:54.764990   72639 cri.go:89] found id: ""
	I1014 15:04:54.765017   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.765025   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:54.765031   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:54.765095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:54.804779   72639 cri.go:89] found id: ""
	I1014 15:04:54.804808   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.804819   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:54.804828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:54.804886   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:54.848657   72639 cri.go:89] found id: ""
	I1014 15:04:54.848682   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.848698   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:54.848705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:54.848765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:54.886806   72639 cri.go:89] found id: ""
	I1014 15:04:54.886834   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.886845   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:54.886853   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:54.886912   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:54.923297   72639 cri.go:89] found id: ""
	I1014 15:04:54.923323   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.923330   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:54.923335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:54.923380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:54.966297   72639 cri.go:89] found id: ""
	I1014 15:04:54.966321   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.966329   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:54.966334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:54.966382   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:55.012047   72639 cri.go:89] found id: ""
	I1014 15:04:55.012071   72639 logs.go:282] 0 containers: []
	W1014 15:04:55.012079   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:55.012087   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:55.012097   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:55.066031   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:55.066063   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:55.080954   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:55.080981   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:55.159644   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:55.159670   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:55.159683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:55.243303   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:55.243341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:57.784555   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:57.799051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:57.799132   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:57.841084   72639 cri.go:89] found id: ""
	I1014 15:04:57.841108   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.841115   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:57.841121   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:57.841167   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:57.881510   72639 cri.go:89] found id: ""
	I1014 15:04:57.881542   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.881555   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:57.881562   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:57.881624   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:57.916893   72639 cri.go:89] found id: ""
	I1014 15:04:57.916923   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.916934   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:57.916940   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:57.916988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:57.956991   72639 cri.go:89] found id: ""
	I1014 15:04:57.957023   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.957036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:57.957046   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:57.957118   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:57.993765   72639 cri.go:89] found id: ""
	I1014 15:04:57.993792   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.993803   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:57.993809   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:57.993869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:56.136994   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.137736   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:55.566021   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.567950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:00.068276   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.310643   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:59.808898   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.032044   72639 cri.go:89] found id: ""
	I1014 15:04:58.032085   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.032098   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:58.032107   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:58.032173   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:58.069733   72639 cri.go:89] found id: ""
	I1014 15:04:58.069754   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.069762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:58.069767   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:58.069813   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:58.105851   72639 cri.go:89] found id: ""
	I1014 15:04:58.105880   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.105891   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:58.105901   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:58.105914   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:58.159922   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:58.159956   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:58.173779   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:58.173802   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:58.253551   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:58.253576   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:58.253591   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:58.342607   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:58.342647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:00.884705   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:00.900147   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:00.900215   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:00.940372   72639 cri.go:89] found id: ""
	I1014 15:05:00.940402   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.940413   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:00.940420   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:00.940489   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:00.981400   72639 cri.go:89] found id: ""
	I1014 15:05:00.981431   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.981441   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:00.981447   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:00.981517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:01.021981   72639 cri.go:89] found id: ""
	I1014 15:05:01.022002   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.022011   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:01.022016   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:01.022067   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:01.056976   72639 cri.go:89] found id: ""
	I1014 15:05:01.057005   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.057013   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:01.057020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:01.057063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:01.092702   72639 cri.go:89] found id: ""
	I1014 15:05:01.092732   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.092739   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:01.092745   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:01.092803   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:01.128861   72639 cri.go:89] found id: ""
	I1014 15:05:01.128892   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.128902   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:01.128908   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:01.128958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:01.162672   72639 cri.go:89] found id: ""
	I1014 15:05:01.162702   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.162712   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:01.162719   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:01.162791   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:01.202724   72639 cri.go:89] found id: ""
	I1014 15:05:01.202751   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.202761   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:01.202770   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:01.202785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:01.280702   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:01.280723   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:01.280735   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:01.362909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:01.362943   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:01.406737   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:01.406766   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:01.460090   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:01.460125   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:00.636730   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.136587   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:02.568415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:05.066568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:01.809661   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:04.309079   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:06.309544   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.975661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:03.989811   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:03.989874   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:04.028396   72639 cri.go:89] found id: ""
	I1014 15:05:04.028426   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.028438   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:04.028445   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:04.028499   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:04.065871   72639 cri.go:89] found id: ""
	I1014 15:05:04.065901   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.065912   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:04.065919   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:04.065980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:04.103155   72639 cri.go:89] found id: ""
	I1014 15:05:04.103184   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.103192   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:04.103198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:04.103248   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:04.139503   72639 cri.go:89] found id: ""
	I1014 15:05:04.139531   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.139539   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:04.139545   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:04.139601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:04.171638   72639 cri.go:89] found id: ""
	I1014 15:05:04.171663   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.171671   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:04.171676   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:04.171734   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:04.213720   72639 cri.go:89] found id: ""
	I1014 15:05:04.213751   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.213760   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:04.213766   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:04.213815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:04.248088   72639 cri.go:89] found id: ""
	I1014 15:05:04.248109   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.248117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:04.248121   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:04.248183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:04.286454   72639 cri.go:89] found id: ""
	I1014 15:05:04.286479   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.286487   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:04.286495   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:04.286506   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:04.339564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:04.339599   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:04.353034   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:04.353061   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:04.432764   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:04.432786   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:04.432797   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:04.514561   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:04.514613   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.057507   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:07.072798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:07.072873   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:07.113672   72639 cri.go:89] found id: ""
	I1014 15:05:07.113694   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.113701   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:07.113706   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:07.113761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:07.149321   72639 cri.go:89] found id: ""
	I1014 15:05:07.149348   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.149357   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:07.149362   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:07.149416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:07.185717   72639 cri.go:89] found id: ""
	I1014 15:05:07.185748   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.185760   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:07.185768   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:07.185822   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:07.225747   72639 cri.go:89] found id: ""
	I1014 15:05:07.225772   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.225783   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:07.225791   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:07.225843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:07.265834   72639 cri.go:89] found id: ""
	I1014 15:05:07.265864   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.265875   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:07.265882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:07.265944   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:07.300595   72639 cri.go:89] found id: ""
	I1014 15:05:07.300622   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.300631   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:07.300637   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:07.300686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:07.343249   72639 cri.go:89] found id: ""
	I1014 15:05:07.343280   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.343291   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:07.343298   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:07.343365   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:07.379525   72639 cri.go:89] found id: ""
	I1014 15:05:07.379549   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.379557   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:07.379564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:07.379576   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:07.393622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:07.393653   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:07.473973   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:07.473998   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:07.474013   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:07.556937   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:07.556971   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.602224   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:07.602249   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:05.137157   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.137297   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.137708   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.066795   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.566723   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:08.809562   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.309821   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:10.156920   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:10.170971   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:10.171037   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:10.206568   72639 cri.go:89] found id: ""
	I1014 15:05:10.206610   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.206623   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:10.206630   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:10.206689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:10.249075   72639 cri.go:89] found id: ""
	I1014 15:05:10.249101   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.249110   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:10.249121   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:10.249175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:10.285620   72639 cri.go:89] found id: ""
	I1014 15:05:10.285649   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.285660   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:10.285667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:10.285730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:10.322291   72639 cri.go:89] found id: ""
	I1014 15:05:10.322314   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.322322   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:10.322327   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:10.322379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:10.356691   72639 cri.go:89] found id: ""
	I1014 15:05:10.356720   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.356730   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:10.356738   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:10.356802   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:10.401192   72639 cri.go:89] found id: ""
	I1014 15:05:10.401223   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.401234   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:10.401242   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:10.401303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:10.438198   72639 cri.go:89] found id: ""
	I1014 15:05:10.438225   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.438236   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:10.438243   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:10.438380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:10.474142   72639 cri.go:89] found id: ""
	I1014 15:05:10.474166   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.474174   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:10.474181   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:10.474193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:10.546549   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:10.546569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:10.546582   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:10.624235   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:10.624268   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:10.664896   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:10.664926   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.719425   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:10.719464   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:11.637824   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.139552   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.566806   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.066803   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.809728   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.310153   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.234162   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:13.247614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:13.247689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:13.285040   72639 cri.go:89] found id: ""
	I1014 15:05:13.285068   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.285078   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:13.285086   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:13.285154   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:13.334084   72639 cri.go:89] found id: ""
	I1014 15:05:13.334125   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.334133   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:13.334139   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:13.334204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:13.369164   72639 cri.go:89] found id: ""
	I1014 15:05:13.369199   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.369211   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:13.369223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:13.369285   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:13.405202   72639 cri.go:89] found id: ""
	I1014 15:05:13.405232   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.405244   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:13.405252   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:13.405304   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:13.443271   72639 cri.go:89] found id: ""
	I1014 15:05:13.443302   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.443311   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:13.443317   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:13.443369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:13.483541   72639 cri.go:89] found id: ""
	I1014 15:05:13.483570   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.483580   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:13.483588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:13.483650   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:13.518580   72639 cri.go:89] found id: ""
	I1014 15:05:13.518622   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.518633   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:13.518641   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:13.518701   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:13.553638   72639 cri.go:89] found id: ""
	I1014 15:05:13.553668   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.553678   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:13.553688   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:13.553702   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:13.605379   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:13.605413   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.620525   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:13.620556   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:13.699628   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:13.699658   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:13.699672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:13.778006   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:13.778046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.316703   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:16.331511   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:16.331577   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:16.367045   72639 cri.go:89] found id: ""
	I1014 15:05:16.367075   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.367083   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:16.367089   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:16.367144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:16.403240   72639 cri.go:89] found id: ""
	I1014 15:05:16.403264   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.403274   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:16.403285   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:16.403344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:16.438570   72639 cri.go:89] found id: ""
	I1014 15:05:16.438612   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.438625   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:16.438632   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:16.438694   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:16.477153   72639 cri.go:89] found id: ""
	I1014 15:05:16.477174   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.477182   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:16.477187   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:16.477232   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:16.516308   72639 cri.go:89] found id: ""
	I1014 15:05:16.516336   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.516348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:16.516355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:16.516421   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:16.551337   72639 cri.go:89] found id: ""
	I1014 15:05:16.551365   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.551375   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:16.551383   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:16.551450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:16.587073   72639 cri.go:89] found id: ""
	I1014 15:05:16.587105   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.587117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:16.587125   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:16.587183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:16.623940   72639 cri.go:89] found id: ""
	I1014 15:05:16.623962   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.623970   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:16.623978   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:16.623989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.671593   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:16.671618   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:16.723057   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:16.723092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:16.737623   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:16.737656   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:16.809539   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:16.809569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:16.809592   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:16.636818   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.637340   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.566523   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.065985   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.809554   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.390406   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:19.404850   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:19.404928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:19.446931   72639 cri.go:89] found id: ""
	I1014 15:05:19.446962   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.446973   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:19.446980   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:19.447043   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:19.488112   72639 cri.go:89] found id: ""
	I1014 15:05:19.488136   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.488144   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:19.488150   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:19.488208   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:19.523333   72639 cri.go:89] found id: ""
	I1014 15:05:19.523365   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.523382   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:19.523389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:19.523447   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:19.557887   72639 cri.go:89] found id: ""
	I1014 15:05:19.557910   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.557918   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:19.557927   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:19.557972   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:19.593792   72639 cri.go:89] found id: ""
	I1014 15:05:19.593815   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.593822   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:19.593873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:19.593922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:19.628291   72639 cri.go:89] found id: ""
	I1014 15:05:19.628324   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.628335   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:19.628343   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:19.628405   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:19.664088   72639 cri.go:89] found id: ""
	I1014 15:05:19.664118   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.664130   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:19.664138   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:19.664211   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:19.700825   72639 cri.go:89] found id: ""
	I1014 15:05:19.700853   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.700863   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:19.700873   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:19.700886   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:19.741631   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:19.741666   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:19.792667   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:19.792706   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:19.806928   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:19.806965   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:19.880030   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:19.880059   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:19.880073   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.465251   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:22.479031   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:22.479096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:22.519123   72639 cri.go:89] found id: ""
	I1014 15:05:22.519147   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.519158   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:22.519171   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:22.519235   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:22.552250   72639 cri.go:89] found id: ""
	I1014 15:05:22.552277   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.552287   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:22.552294   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:22.552354   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:22.594213   72639 cri.go:89] found id: ""
	I1014 15:05:22.594243   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.594253   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:22.594261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:22.594310   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:22.630081   72639 cri.go:89] found id: ""
	I1014 15:05:22.630110   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.630121   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:22.630129   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:22.630195   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:22.665454   72639 cri.go:89] found id: ""
	I1014 15:05:22.665485   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.665497   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:22.665505   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:22.665568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:22.710697   72639 cri.go:89] found id: ""
	I1014 15:05:22.710725   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.710734   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:22.710742   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:22.710798   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:22.748486   72639 cri.go:89] found id: ""
	I1014 15:05:22.748516   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.748527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:22.748534   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:22.748594   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:22.784646   72639 cri.go:89] found id: ""
	I1014 15:05:22.784674   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.784684   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:22.784695   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:22.784709   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:22.797853   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:22.797880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:22.875382   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:22.875406   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:22.875422   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.957055   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:22.957089   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:20.638448   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.137051   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.066950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.566775   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.309958   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:25.810168   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.008642   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:23.008672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.561277   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:25.575543   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:25.575606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:25.614260   72639 cri.go:89] found id: ""
	I1014 15:05:25.614283   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.614291   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:25.614296   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:25.614353   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:25.654267   72639 cri.go:89] found id: ""
	I1014 15:05:25.654295   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.654307   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:25.654314   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:25.654385   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:25.707597   72639 cri.go:89] found id: ""
	I1014 15:05:25.707626   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.707637   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:25.707644   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:25.707707   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:25.747477   72639 cri.go:89] found id: ""
	I1014 15:05:25.747500   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.747508   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:25.747513   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:25.747571   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:25.785245   72639 cri.go:89] found id: ""
	I1014 15:05:25.785270   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.785279   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:25.785288   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:25.785342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:25.820619   72639 cri.go:89] found id: ""
	I1014 15:05:25.820643   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.820651   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:25.820665   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:25.820722   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:25.861644   72639 cri.go:89] found id: ""
	I1014 15:05:25.861665   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.861673   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:25.861678   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:25.861724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:25.901009   72639 cri.go:89] found id: ""
	I1014 15:05:25.901032   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.901046   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:25.901056   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:25.901068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:25.942918   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:25.942941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.993931   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:25.993964   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:26.008252   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:26.008280   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:26.087316   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:26.087336   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:26.087347   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:25.636727   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:27.637053   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:26.066529   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.567224   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.308855   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:30.811310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.667377   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:28.682586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:28.682682   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:28.729576   72639 cri.go:89] found id: ""
	I1014 15:05:28.729600   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.729608   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:28.729614   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:28.729673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:28.766637   72639 cri.go:89] found id: ""
	I1014 15:05:28.766669   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.766682   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:28.766690   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:28.766762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:28.802280   72639 cri.go:89] found id: ""
	I1014 15:05:28.802308   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.802317   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:28.802322   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:28.802395   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:28.840788   72639 cri.go:89] found id: ""
	I1014 15:05:28.840822   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.840833   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:28.840841   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:28.840898   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:28.878403   72639 cri.go:89] found id: ""
	I1014 15:05:28.878437   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.878447   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:28.878453   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:28.878505   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:28.919054   72639 cri.go:89] found id: ""
	I1014 15:05:28.919082   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.919090   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:28.919096   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:28.919146   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:28.955097   72639 cri.go:89] found id: ""
	I1014 15:05:28.955124   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.955134   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:28.955142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:28.955214   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:28.995681   72639 cri.go:89] found id: ""
	I1014 15:05:28.995711   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.995722   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:28.995731   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:28.995746   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:29.073041   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:29.073066   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:29.073083   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:29.152803   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:29.152838   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:29.192205   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:29.192239   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:29.248128   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:29.248166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:31.762647   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:31.776372   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:31.776454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:31.812234   72639 cri.go:89] found id: ""
	I1014 15:05:31.812259   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.812268   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:31.812275   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:31.812347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:31.850248   72639 cri.go:89] found id: ""
	I1014 15:05:31.850277   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.850294   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:31.850301   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:31.850363   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:31.887768   72639 cri.go:89] found id: ""
	I1014 15:05:31.887796   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.887808   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:31.887816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:31.887870   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:31.923434   72639 cri.go:89] found id: ""
	I1014 15:05:31.923464   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.923476   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:31.923483   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:31.923547   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:31.961027   72639 cri.go:89] found id: ""
	I1014 15:05:31.961055   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.961066   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:31.961073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:31.961135   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:31.996222   72639 cri.go:89] found id: ""
	I1014 15:05:31.996250   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.996260   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:31.996267   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:31.996329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:32.034396   72639 cri.go:89] found id: ""
	I1014 15:05:32.034441   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.034452   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:32.034460   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:32.034528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:32.080105   72639 cri.go:89] found id: ""
	I1014 15:05:32.080142   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.080153   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:32.080164   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:32.080178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:32.161120   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:32.161151   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:32.213511   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:32.213546   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:32.271250   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:32.271287   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:32.285452   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:32.285483   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:32.366108   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:30.136896   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:32.138906   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:31.066229   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.066370   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.067821   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.309846   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.310713   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:34.867317   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:34.882058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:34.882125   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.926220   72639 cri.go:89] found id: ""
	I1014 15:05:34.926251   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.926261   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:34.926268   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:34.926341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:34.965657   72639 cri.go:89] found id: ""
	I1014 15:05:34.965691   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.965702   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:34.965709   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:34.965775   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:35.002422   72639 cri.go:89] found id: ""
	I1014 15:05:35.002446   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.002454   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:35.002459   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:35.002523   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:35.040029   72639 cri.go:89] found id: ""
	I1014 15:05:35.040057   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.040067   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:35.040073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:35.040137   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:35.077041   72639 cri.go:89] found id: ""
	I1014 15:05:35.077067   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.077075   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:35.077080   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:35.077129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:35.113723   72639 cri.go:89] found id: ""
	I1014 15:05:35.113754   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.113763   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:35.113770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:35.113854   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:35.152003   72639 cri.go:89] found id: ""
	I1014 15:05:35.152025   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.152033   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:35.152038   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:35.152084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:35.186707   72639 cri.go:89] found id: ""
	I1014 15:05:35.186735   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.186746   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:35.186756   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:35.186769   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:35.267899   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:35.267941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:35.310382   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:35.310414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:35.364811   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:35.364852   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:35.378359   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:35.378386   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:35.453522   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:37.953807   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:37.967515   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:37.967579   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.637257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.137643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.566344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:39.566704   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.810414   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:40.308798   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:38.007923   72639 cri.go:89] found id: ""
	I1014 15:05:38.007955   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.007964   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:38.007969   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:38.008023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:38.047451   72639 cri.go:89] found id: ""
	I1014 15:05:38.047476   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.047484   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:38.047490   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:38.047542   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:38.087141   72639 cri.go:89] found id: ""
	I1014 15:05:38.087165   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.087174   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:38.087186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:38.087234   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:38.126556   72639 cri.go:89] found id: ""
	I1014 15:05:38.126583   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.126604   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:38.126612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:38.126670   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:38.165318   72639 cri.go:89] found id: ""
	I1014 15:05:38.165341   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.165350   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:38.165356   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:38.165400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:38.199498   72639 cri.go:89] found id: ""
	I1014 15:05:38.199533   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.199544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:38.199553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:38.199618   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:38.235030   72639 cri.go:89] found id: ""
	I1014 15:05:38.235058   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.235067   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:38.235072   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:38.235129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:38.268900   72639 cri.go:89] found id: ""
	I1014 15:05:38.268926   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.268935   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:38.268943   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:38.268957   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:38.282503   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:38.282532   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:38.357943   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:38.357972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:38.357987   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:38.448417   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:38.448453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:38.490023   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:38.490049   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.045691   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:41.061188   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:41.061251   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:41.102885   72639 cri.go:89] found id: ""
	I1014 15:05:41.102909   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.102917   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:41.102923   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:41.102971   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:41.139402   72639 cri.go:89] found id: ""
	I1014 15:05:41.139427   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.139437   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:41.139444   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:41.139501   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:41.179881   72639 cri.go:89] found id: ""
	I1014 15:05:41.179926   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.179939   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:41.179946   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:41.180008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:41.215861   72639 cri.go:89] found id: ""
	I1014 15:05:41.215897   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.215910   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:41.215919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:41.215987   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:41.251314   72639 cri.go:89] found id: ""
	I1014 15:05:41.251341   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.251351   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:41.251355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:41.251404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:41.285986   72639 cri.go:89] found id: ""
	I1014 15:05:41.286010   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.286017   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:41.286025   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:41.286071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:41.323730   72639 cri.go:89] found id: ""
	I1014 15:05:41.323756   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.323764   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:41.323769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:41.323816   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:41.360787   72639 cri.go:89] found id: ""
	I1014 15:05:41.360817   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.360825   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:41.360834   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:41.360847   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:41.403137   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:41.403172   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.459217   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:41.459253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:41.473529   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:41.473558   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:41.547384   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:41.547405   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:41.547416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:39.637477   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.137176   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:41.569245   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.066760   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.309212   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.310281   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.129494   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:44.144061   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:44.144129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:44.185872   72639 cri.go:89] found id: ""
	I1014 15:05:44.185896   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.185904   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:44.185909   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:44.185955   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:44.222618   72639 cri.go:89] found id: ""
	I1014 15:05:44.222648   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.222658   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:44.222663   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:44.222723   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:44.260730   72639 cri.go:89] found id: ""
	I1014 15:05:44.260761   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.260773   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:44.260780   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:44.260872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:44.303033   72639 cri.go:89] found id: ""
	I1014 15:05:44.303124   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.303141   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:44.303150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:44.303223   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:44.344573   72639 cri.go:89] found id: ""
	I1014 15:05:44.344600   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.344609   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:44.344614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:44.344660   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:44.386091   72639 cri.go:89] found id: ""
	I1014 15:05:44.386122   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.386131   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:44.386137   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:44.386199   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:44.424609   72639 cri.go:89] found id: ""
	I1014 15:05:44.424634   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.424644   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:44.424656   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:44.424724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:44.463997   72639 cri.go:89] found id: ""
	I1014 15:05:44.464023   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.464033   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:44.464043   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:44.464057   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:44.516883   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:44.516921   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:44.530785   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:44.530820   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:44.605202   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:44.605229   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:44.605245   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.685277   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:44.685312   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:47.227851   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:47.242737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:47.242817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:47.279395   72639 cri.go:89] found id: ""
	I1014 15:05:47.279421   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.279428   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:47.279434   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:47.279495   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:47.315002   72639 cri.go:89] found id: ""
	I1014 15:05:47.315032   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.315043   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:47.315050   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:47.315120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:47.354133   72639 cri.go:89] found id: ""
	I1014 15:05:47.354162   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.354173   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:47.354180   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:47.354245   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:47.389394   72639 cri.go:89] found id: ""
	I1014 15:05:47.389419   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.389427   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:47.389439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:47.389498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:47.426564   72639 cri.go:89] found id: ""
	I1014 15:05:47.426592   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.426619   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:47.426627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:47.426676   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:47.466953   72639 cri.go:89] found id: ""
	I1014 15:05:47.466980   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.466989   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:47.466996   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:47.467065   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:47.508563   72639 cri.go:89] found id: ""
	I1014 15:05:47.508595   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.508605   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:47.508613   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:47.508665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:47.548974   72639 cri.go:89] found id: ""
	I1014 15:05:47.549002   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.549012   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:47.549022   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:47.549036   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:47.604768   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:47.604799   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:47.619681   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:47.619717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:47.692479   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:47.692506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:47.692522   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:47.773711   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:47.773751   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:44.637916   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:47.137070   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.566472   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.566743   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.809406   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.811359   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:51.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.314509   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:50.330883   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:50.330958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:50.375090   72639 cri.go:89] found id: ""
	I1014 15:05:50.375121   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.375133   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:50.375140   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:50.375201   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:50.415000   72639 cri.go:89] found id: ""
	I1014 15:05:50.415031   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.415041   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:50.415048   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:50.415099   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:50.453937   72639 cri.go:89] found id: ""
	I1014 15:05:50.453967   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.453976   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:50.453983   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:50.454047   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:50.498752   72639 cri.go:89] found id: ""
	I1014 15:05:50.498778   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.498785   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:50.498790   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:50.498858   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:50.537819   72639 cri.go:89] found id: ""
	I1014 15:05:50.537855   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.537864   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:50.537871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:50.537920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:50.577141   72639 cri.go:89] found id: ""
	I1014 15:05:50.577168   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.577179   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:50.577186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:50.577250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:50.612462   72639 cri.go:89] found id: ""
	I1014 15:05:50.612504   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.612527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:50.612535   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:50.612597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:50.648816   72639 cri.go:89] found id: ""
	I1014 15:05:50.648845   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.648855   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:50.648866   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:50.648879   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:50.662546   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:50.662578   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:50.733128   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:50.733152   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:50.733166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:50.810884   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:50.810913   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.855878   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:50.855905   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:49.637103   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:52.137615   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.567300   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.066883   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.810090   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.312861   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.413608   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:53.428380   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:53.428453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:53.463440   72639 cri.go:89] found id: ""
	I1014 15:05:53.463464   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.463473   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:53.463479   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:53.463534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:53.499024   72639 cri.go:89] found id: ""
	I1014 15:05:53.499050   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.499058   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:53.499064   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:53.499121   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:53.534396   72639 cri.go:89] found id: ""
	I1014 15:05:53.534425   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.534435   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:53.534442   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:53.534504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:53.571396   72639 cri.go:89] found id: ""
	I1014 15:05:53.571422   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.571432   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:53.571439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:53.571496   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:53.606219   72639 cri.go:89] found id: ""
	I1014 15:05:53.606247   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.606254   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:53.606260   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:53.606309   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:53.644906   72639 cri.go:89] found id: ""
	I1014 15:05:53.644929   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.644938   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:53.644945   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:53.645005   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:53.684764   72639 cri.go:89] found id: ""
	I1014 15:05:53.684795   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.684808   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:53.684817   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:53.684872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:53.720559   72639 cri.go:89] found id: ""
	I1014 15:05:53.720587   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.720596   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:53.720605   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:53.720626   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.773759   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:53.773798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:53.787688   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:53.787717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:53.863141   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:53.863163   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:53.863176   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:53.942949   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:53.942989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:56.487207   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:56.500670   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:56.500730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:56.533851   72639 cri.go:89] found id: ""
	I1014 15:05:56.533882   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.533894   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:56.533901   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:56.533964   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:56.573169   72639 cri.go:89] found id: ""
	I1014 15:05:56.573194   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.573201   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:56.573207   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:56.573260   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:56.608110   72639 cri.go:89] found id: ""
	I1014 15:05:56.608138   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.608151   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:56.608158   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:56.608218   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:56.646030   72639 cri.go:89] found id: ""
	I1014 15:05:56.646054   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.646061   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:56.646067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:56.646112   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:56.689427   72639 cri.go:89] found id: ""
	I1014 15:05:56.689455   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.689465   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:56.689473   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:56.689528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:56.723831   72639 cri.go:89] found id: ""
	I1014 15:05:56.723856   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.723865   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:56.723871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:56.723928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:56.756700   72639 cri.go:89] found id: ""
	I1014 15:05:56.756725   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.756734   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:56.756741   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:56.756808   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:56.788201   72639 cri.go:89] found id: ""
	I1014 15:05:56.788228   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.788235   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:56.788242   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:56.788253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:56.847840   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:56.847876   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:56.861984   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:56.862016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:56.933190   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:56.933214   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:56.933226   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:57.015909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:57.015958   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:54.636591   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.638712   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.137008   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:55.566153   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:57.566963   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.067261   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:58.810164   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.811078   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.559421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:59.575593   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:59.575673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:59.611369   72639 cri.go:89] found id: ""
	I1014 15:05:59.611399   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.611409   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:59.611416   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:59.611485   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:59.645786   72639 cri.go:89] found id: ""
	I1014 15:05:59.645817   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.645827   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:59.645834   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:59.645895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:59.681463   72639 cri.go:89] found id: ""
	I1014 15:05:59.681491   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.681499   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:59.681504   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:59.681553   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:59.723738   72639 cri.go:89] found id: ""
	I1014 15:05:59.723767   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.723775   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:59.723782   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:59.723845   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:59.763890   72639 cri.go:89] found id: ""
	I1014 15:05:59.763919   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.763958   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:59.763966   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:59.764027   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:59.802981   72639 cri.go:89] found id: ""
	I1014 15:05:59.803007   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.803015   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:59.803021   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:59.803074   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:59.841887   72639 cri.go:89] found id: ""
	I1014 15:05:59.841916   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.841927   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:59.841934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:59.841989   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:59.877190   72639 cri.go:89] found id: ""
	I1014 15:05:59.877221   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.877231   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:59.877240   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:59.877254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:59.890838   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:59.890864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:59.970122   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:59.970147   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:59.970163   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:00.058994   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:00.059032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:00.103227   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:00.103262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:02.655437   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:02.671240   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:02.671307   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:02.708826   72639 cri.go:89] found id: ""
	I1014 15:06:02.708859   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.708871   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:02.708879   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:02.708943   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:02.744504   72639 cri.go:89] found id: ""
	I1014 15:06:02.744535   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.744546   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:02.744553   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:02.744615   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:02.781144   72639 cri.go:89] found id: ""
	I1014 15:06:02.781180   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.781193   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:02.781201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:02.781281   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:02.819527   72639 cri.go:89] found id: ""
	I1014 15:06:02.819558   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.819567   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:02.819572   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:02.819630   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:02.855653   72639 cri.go:89] found id: ""
	I1014 15:06:02.855683   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.855693   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:02.855700   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:02.855761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:02.900843   72639 cri.go:89] found id: ""
	I1014 15:06:02.900876   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.900888   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:02.900896   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:02.900961   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:02.941812   72639 cri.go:89] found id: ""
	I1014 15:06:02.941840   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.941851   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:02.941857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:02.941919   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:02.980213   72639 cri.go:89] found id: ""
	I1014 15:06:02.980238   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.980246   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:02.980253   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:02.980265   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:00.130683   72173 pod_ready.go:82] duration metric: took 4m0.000550021s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:00.130707   72173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:06:00.130723   72173 pod_ready.go:39] duration metric: took 4m13.708579322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:00.130753   72173 kubeadm.go:597] duration metric: took 4m21.979284634s to restartPrimaryControlPlane
	W1014 15:06:00.130836   72173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:00.130870   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:02.566183   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.066638   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.309953   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.311484   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.034263   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:03.034301   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:03.048574   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:03.048606   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:03.121902   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:03.121925   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:03.121939   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:03.197407   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:03.197445   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:05.737723   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:05.751892   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:05.751959   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:05.789209   72639 cri.go:89] found id: ""
	I1014 15:06:05.789235   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.789242   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:05.789247   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:05.789294   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:05.826189   72639 cri.go:89] found id: ""
	I1014 15:06:05.826220   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.826229   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:05.826236   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:05.826344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:05.864264   72639 cri.go:89] found id: ""
	I1014 15:06:05.864297   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.864308   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:05.864314   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:05.864371   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:05.899697   72639 cri.go:89] found id: ""
	I1014 15:06:05.899724   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.899732   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:05.899737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:05.899784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:05.939552   72639 cri.go:89] found id: ""
	I1014 15:06:05.939583   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.939593   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:05.939601   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:05.939668   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:05.999732   72639 cri.go:89] found id: ""
	I1014 15:06:05.999759   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.999770   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:05.999776   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:05.999834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:06.036228   72639 cri.go:89] found id: ""
	I1014 15:06:06.036259   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.036276   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:06.036284   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:06.036343   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:06.071744   72639 cri.go:89] found id: ""
	I1014 15:06:06.071774   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.071785   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:06.071795   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:06.071808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:06.125737   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:06.125774   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:06.139150   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:06.139177   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:06.206731   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:06.206757   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:06.206773   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:06.287183   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:06.287218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:07.565983   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.065897   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:07.809832   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.309290   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:08.827345   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:08.841290   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:08.841384   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:08.877789   72639 cri.go:89] found id: ""
	I1014 15:06:08.877815   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.877824   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:08.877832   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:08.877895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:08.912491   72639 cri.go:89] found id: ""
	I1014 15:06:08.912517   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.912525   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:08.912530   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:08.912586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:08.948727   72639 cri.go:89] found id: ""
	I1014 15:06:08.948755   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.948765   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:08.948773   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:08.948837   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:08.984397   72639 cri.go:89] found id: ""
	I1014 15:06:08.984428   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.984440   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:08.984448   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:08.984498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:09.019222   72639 cri.go:89] found id: ""
	I1014 15:06:09.019250   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.019260   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:09.019268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:09.019329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:09.058309   72639 cri.go:89] found id: ""
	I1014 15:06:09.058335   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.058346   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:09.058353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:09.058415   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:09.096508   72639 cri.go:89] found id: ""
	I1014 15:06:09.096535   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.096544   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:09.096550   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:09.096599   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:09.134564   72639 cri.go:89] found id: ""
	I1014 15:06:09.134611   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.134624   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:09.134635   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:09.134647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:09.188220   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:09.188254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:09.203119   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:09.203149   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:09.279357   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:09.279379   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:09.279390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:09.364219   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:09.364253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:11.910976   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:11.926067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:11.926149   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:11.966238   72639 cri.go:89] found id: ""
	I1014 15:06:11.966271   72639 logs.go:282] 0 containers: []
	W1014 15:06:11.966282   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:11.966289   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:11.966350   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:12.002580   72639 cri.go:89] found id: ""
	I1014 15:06:12.002617   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.002630   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:12.002637   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:12.002698   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:12.037014   72639 cri.go:89] found id: ""
	I1014 15:06:12.037037   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.037046   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:12.037051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:12.037111   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:12.070937   72639 cri.go:89] found id: ""
	I1014 15:06:12.070957   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.070965   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:12.070970   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:12.071019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:12.104920   72639 cri.go:89] found id: ""
	I1014 15:06:12.104949   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.104960   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:12.104967   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:12.105026   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:12.142498   72639 cri.go:89] found id: ""
	I1014 15:06:12.142530   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.142544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:12.142555   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:12.142628   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:12.179590   72639 cri.go:89] found id: ""
	I1014 15:06:12.179613   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.179621   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:12.179627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:12.179675   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:12.213947   72639 cri.go:89] found id: ""
	I1014 15:06:12.213973   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.213981   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:12.213989   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:12.213998   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:12.268214   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:12.268257   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:12.283561   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:12.283594   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:12.382344   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:12.382367   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:12.382377   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:12.469818   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:12.469854   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:12.066154   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.565962   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:12.310167   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.810273   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:15.011529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:15.025355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:15.025423   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:15.060996   72639 cri.go:89] found id: ""
	I1014 15:06:15.061028   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.061040   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:15.061047   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:15.061120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:15.103050   72639 cri.go:89] found id: ""
	I1014 15:06:15.103074   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.103082   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:15.103088   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:15.103140   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:15.140095   72639 cri.go:89] found id: ""
	I1014 15:06:15.140122   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.140132   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:15.140139   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:15.140207   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:15.174612   72639 cri.go:89] found id: ""
	I1014 15:06:15.174642   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.174654   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:15.174669   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:15.174737   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:15.209116   72639 cri.go:89] found id: ""
	I1014 15:06:15.209142   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.209152   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:15.209160   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:15.209221   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:15.242857   72639 cri.go:89] found id: ""
	I1014 15:06:15.242885   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.242896   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:15.242902   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:15.242966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:15.283038   72639 cri.go:89] found id: ""
	I1014 15:06:15.283066   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.283076   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:15.283083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:15.283144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:15.319577   72639 cri.go:89] found id: ""
	I1014 15:06:15.319604   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.319612   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:15.319622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:15.319636   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:15.391485   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:15.391506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:15.391520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:15.470140   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:15.470192   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.513098   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:15.513132   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:15.568275   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:15.568305   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:17.065956   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.566207   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:17.308463   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.309185   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.310841   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:18.085915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:18.113889   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:18.113958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:18.167486   72639 cri.go:89] found id: ""
	I1014 15:06:18.167511   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.167519   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:18.167525   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:18.167568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:18.230244   72639 cri.go:89] found id: ""
	I1014 15:06:18.230273   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.230283   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:18.230291   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:18.230351   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:18.264223   72639 cri.go:89] found id: ""
	I1014 15:06:18.264252   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.264261   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:18.264268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:18.264332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:18.298719   72639 cri.go:89] found id: ""
	I1014 15:06:18.298750   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.298762   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:18.298770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:18.298843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:18.335113   72639 cri.go:89] found id: ""
	I1014 15:06:18.335140   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.335147   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:18.335153   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:18.335212   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:18.373690   72639 cri.go:89] found id: ""
	I1014 15:06:18.373721   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.373736   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:18.373743   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:18.373792   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:18.411138   72639 cri.go:89] found id: ""
	I1014 15:06:18.411171   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.411182   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:18.411190   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:18.411250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:18.451281   72639 cri.go:89] found id: ""
	I1014 15:06:18.451306   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.451314   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:18.451323   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:18.451334   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:18.502141   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:18.502178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.517449   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:18.517476   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:18.586737   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:18.586760   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:18.586776   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:18.670234   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:18.670270   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.210200   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:21.222998   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.223053   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.257132   72639 cri.go:89] found id: ""
	I1014 15:06:21.257160   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.257167   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:21.257174   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.257237   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.290905   72639 cri.go:89] found id: ""
	I1014 15:06:21.290933   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.290945   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:21.290952   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.291007   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.331067   72639 cri.go:89] found id: ""
	I1014 15:06:21.331098   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.331108   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:21.331128   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.331178   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.370042   72639 cri.go:89] found id: ""
	I1014 15:06:21.370069   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.370077   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:21.370083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.370141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:21.414900   72639 cri.go:89] found id: ""
	I1014 15:06:21.414920   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.414932   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:21.414938   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:21.414985   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:21.452914   72639 cri.go:89] found id: ""
	I1014 15:06:21.452941   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.452952   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:21.452960   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:21.453022   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:21.486725   72639 cri.go:89] found id: ""
	I1014 15:06:21.486752   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.486763   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:21.486770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:21.486831   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:21.524012   72639 cri.go:89] found id: ""
	I1014 15:06:21.524034   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.524042   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:21.524049   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:21.524059   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:21.603238   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:21.603279   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.645655   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:21.645689   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:21.701053   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:21.701092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:21.715515   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:21.715542   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:21.781831   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
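	The repeated "connection to the server localhost:8443 was refused" failures above simply mean nothing is accepting connections on the apiserver port yet. As context only (this is not minikube code), a minimal Go sketch of that connectivity check, assuming it is run on the node itself:

	// probe_apiserver.go - minimal sketch (not part of minikube): check whether the
	// endpoint kubectl keeps rejecting, localhost:8443, accepts TCP connections at all.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// this is the state the log above is in: connection refused
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on localhost:8443")
	}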
	I1014 15:06:22.067051   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:24.567173   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.810342   72390 pod_ready.go:82] duration metric: took 4m0.007657098s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:21.810365   72390 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 15:06:21.810382   72390 pod_ready.go:39] duration metric: took 4m7.92113061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:21.810401   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:21.810433   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.810488   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.856565   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:21.856587   72390 cri.go:89] found id: ""
	I1014 15:06:21.856594   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:21.856654   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.861036   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.861091   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.898486   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:21.898517   72390 cri.go:89] found id: ""
	I1014 15:06:21.898528   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:21.898587   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.903145   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.903245   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.941127   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:21.941164   72390 cri.go:89] found id: ""
	I1014 15:06:21.941173   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:21.941232   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.945584   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.945658   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.994370   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:21.994398   72390 cri.go:89] found id: ""
	I1014 15:06:21.994407   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:21.994454   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.998498   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.998547   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:22.037415   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.037443   72390 cri.go:89] found id: ""
	I1014 15:06:22.037453   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:22.037507   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.041882   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:22.041947   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:22.079219   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.079243   72390 cri.go:89] found id: ""
	I1014 15:06:22.079252   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:22.079319   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.083373   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:22.083432   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:22.120795   72390 cri.go:89] found id: ""
	I1014 15:06:22.120818   72390 logs.go:282] 0 containers: []
	W1014 15:06:22.120825   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:22.120832   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:22.120889   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:22.158545   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.158571   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.158577   72390 cri.go:89] found id: ""
	I1014 15:06:22.158586   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:22.158662   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.162500   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.166734   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:22.166759   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.202711   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:22.202736   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:22.279594   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:22.279635   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:22.293836   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:22.293863   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:22.335451   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:22.335478   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:22.374244   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:22.374274   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.422538   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:22.422567   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.486973   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:22.487009   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.528871   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:22.528899   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:22.575947   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:22.575982   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:22.713356   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:22.713387   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:22.760315   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:22.760348   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:22.811144   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:22.811169   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:25.780847   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:25.800698   72390 api_server.go:72] duration metric: took 4m18.640749756s to wait for apiserver process to appear ...
	I1014 15:06:25.800733   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:25.800779   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:25.800845   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:25.841159   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:25.841193   72390 cri.go:89] found id: ""
	I1014 15:06:25.841203   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:25.841259   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.845503   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:25.845560   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:25.884122   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:25.884151   72390 cri.go:89] found id: ""
	I1014 15:06:25.884161   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:25.884223   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.889638   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:25.889700   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:25.931199   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:25.931220   72390 cri.go:89] found id: ""
	I1014 15:06:25.931230   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:25.931285   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.936063   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:25.936127   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:25.979162   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:25.979188   72390 cri.go:89] found id: ""
	I1014 15:06:25.979197   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:25.979254   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.983550   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:25.983611   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:26.021835   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:26.021854   72390 cri.go:89] found id: ""
	I1014 15:06:26.021862   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:26.021911   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.026005   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:26.026073   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:26.067719   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:26.067740   72390 cri.go:89] found id: ""
	I1014 15:06:26.067749   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:26.067803   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.073387   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:26.073453   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:26.116305   72390 cri.go:89] found id: ""
	I1014 15:06:26.116336   72390 logs.go:282] 0 containers: []
	W1014 15:06:26.116349   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:26.116358   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:26.116427   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:26.156959   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.156985   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.156991   72390 cri.go:89] found id: ""
	I1014 15:06:26.156999   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:26.157051   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.161437   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.165696   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:26.165718   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:26.282026   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:26.282056   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:26.333504   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:26.333543   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:26.376435   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:26.376469   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.416633   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:26.416660   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.388546   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.257645941s)
	I1014 15:06:26.388631   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:26.407118   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:26.417718   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:26.428364   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:26.428391   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:26.428451   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:26.437953   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:26.438026   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:26.448356   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:26.458476   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:26.458541   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:26.469941   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.482934   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:26.483016   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.495682   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:26.506113   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:26.506176   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
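	The four grep/rm pairs above implement a stale-kubeconfig check: each file under /etc/kubernetes is removed unless it already points at the expected control-plane endpoint. A minimal Go sketch of that pattern (assumes root on the node; not minikube's actual implementation):

	// stale_config_cleanup.go - illustrative sketch of the check logged above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
		for _, f := range files {
			path := "/etc/kubernetes/" + f
			data, err := os.ReadFile(path)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// mirrors "may not be in ... - will remove" followed by "sudo rm -f"
				os.Remove(path)
				fmt.Println("removed (missing or stale):", path)
				continue
			}
			fmt.Println("kept:", path)
		}
	}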
	I1014 15:06:26.517784   72173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:26.568927   72173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:06:26.568978   72173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:26.685727   72173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:26.685855   72173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:26.685963   72173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:26.693948   72173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:26.696177   72173 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:26.696269   72173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:26.696318   72173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:26.696388   72173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:26.696438   72173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:26.696495   72173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:26.696536   72173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:26.696588   72173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:26.696639   72173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:26.696696   72173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:26.696760   72173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:26.700275   72173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:26.700406   72173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:26.831734   72173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:27.336318   72173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:06:27.574604   72173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:27.681370   72173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:27.788769   72173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:27.789324   72173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:27.791842   72173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:24.282018   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:24.295177   72639 kubeadm.go:597] duration metric: took 4m4.450514459s to restartPrimaryControlPlane
	W1014 15:06:24.295255   72639 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:24.295283   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:27.793786   72173 out.go:235]   - Booting up control plane ...
	I1014 15:06:27.793891   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:27.793980   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:27.794089   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:27.815223   72173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:27.821764   72173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:27.821817   72173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:27.965327   72173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:06:27.965707   72173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:06:28.967332   72173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001260991s
	I1014 15:06:28.967473   72173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:06:29.238014   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.942706631s)
	I1014 15:06:29.238096   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:29.258804   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:29.269440   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:29.279613   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:29.279633   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:29.279672   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:29.292840   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:29.292912   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:29.306987   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:29.319896   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:29.319970   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:29.333974   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.343993   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:29.344051   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.354691   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:29.364354   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:29.364422   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:29.374674   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"

	I1014 15:06:29.452845   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:06:29.452961   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:29.618263   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:29.618446   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:29.618582   72639 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:29.813387   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:29.815501   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:29.815610   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:29.815697   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:29.815799   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:29.815879   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:29.815971   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:29.816039   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:29.816125   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:29.816206   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:29.816307   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:29.816404   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:29.816454   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:29.816531   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:29.944505   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:30.106467   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:30.226356   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:30.322169   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:30.342382   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:30.343666   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:30.343736   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:30.507000   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:27.066923   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:29.068434   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:26.453659   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:26.453693   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:26.900485   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:26.900518   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:26.925431   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:26.925461   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:26.986104   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:26.986140   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:27.037557   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:27.037600   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:27.084362   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:27.084397   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:27.138680   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:27.138713   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:27.191283   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:27.191314   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:29.761781   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:06:29.769020   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:06:29.770210   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:29.770232   72390 api_server.go:131] duration metric: took 3.969490314s to wait for apiserver health ...
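	The healthz wait above polls the secure apiserver endpoint until it answers 200 "ok". A minimal Go sketch of such a probe, assuming the address from the log and skipping certificate verification for brevity (minikube's real client uses the cluster CA):

	// healthz_probe.go - illustrative sketch, not minikube code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.50.128:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // matches "returned 200: ok" in the log above
				}
			}
			time.Sleep(time.Second)
		}
	}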
	I1014 15:06:29.770242   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:29.770268   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:29.770328   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:29.827908   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:29.827930   72390 cri.go:89] found id: ""
	I1014 15:06:29.827939   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:29.827994   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.837786   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:29.837864   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:29.877625   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:29.877661   72390 cri.go:89] found id: ""
	I1014 15:06:29.877672   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:29.877738   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.882502   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:29.882578   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:29.923002   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:29.923027   72390 cri.go:89] found id: ""
	I1014 15:06:29.923037   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:29.923094   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.927559   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:29.927621   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:29.966098   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:29.966124   72390 cri.go:89] found id: ""
	I1014 15:06:29.966133   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:29.966189   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.972287   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:29.972371   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:30.024389   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.024414   72390 cri.go:89] found id: ""
	I1014 15:06:30.024423   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:30.024481   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.029914   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:30.029976   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:30.085703   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.085727   72390 cri.go:89] found id: ""
	I1014 15:06:30.085737   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:30.085806   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.097004   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:30.097098   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:30.147464   72390 cri.go:89] found id: ""
	I1014 15:06:30.147494   72390 logs.go:282] 0 containers: []
	W1014 15:06:30.147505   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:30.147512   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:30.147573   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:30.195003   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.195030   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:30.195036   72390 cri.go:89] found id: ""
	I1014 15:06:30.195045   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:30.195099   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.199436   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.204079   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:30.204105   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:30.221021   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:30.221049   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:30.280979   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:30.281013   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:30.339261   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:30.339291   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.390034   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:30.390081   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.461221   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:30.461262   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.504100   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:30.504134   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:30.870561   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:30.870629   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:30.942952   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:30.942998   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:30.995435   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:30.995484   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:31.038804   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:31.038839   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:31.080187   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:31.080218   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:31.122248   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:31.122295   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
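The burst above is minikube's log-gathering pass (logs.go): it tails each control-plane container through crictl, pulls kubelet and CRI-O unit logs with journalctl, and runs kubectl describe nodes against the node-local kubeconfig. The same data can be collected by hand when triaging a failure; a minimal sketch, assuming shell access to the node (for example via "minikube ssh -p <profile>"), with <container-id> standing in for any of the ids printed above:

    sudo crictl ps -a                      # all containers, including exited ones
    sudo crictl logs --tail 400 <container-id>
    sudo journalctl -u crio -n 400         # container runtime logs
    sudo journalctl -u kubelet -n 400      # kubelet logs
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
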
	I1014 15:06:30.509157   72639 out.go:235]   - Booting up control plane ...
	I1014 15:06:30.509293   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:30.518440   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:30.520572   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:30.522337   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:30.524996   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:06:33.742510   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:06:33.742539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.742546   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.742552   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.742557   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.742562   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.742566   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.742576   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.742582   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.742615   72390 system_pods.go:74] duration metric: took 3.972347536s to wait for pod list to return data ...
	I1014 15:06:33.742628   72390 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:33.744532   72390 default_sa.go:45] found service account: "default"
	I1014 15:06:33.744551   72390 default_sa.go:55] duration metric: took 1.918153ms for default service account to be created ...
	I1014 15:06:33.744558   72390 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:33.750292   72390 system_pods.go:86] 8 kube-system pods found
	I1014 15:06:33.750315   72390 system_pods.go:89] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.750320   72390 system_pods.go:89] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.750324   72390 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.750329   72390 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.750332   72390 system_pods.go:89] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.750335   72390 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.750341   72390 system_pods.go:89] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.750346   72390 system_pods.go:89] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.750352   72390 system_pods.go:126] duration metric: took 5.790549ms to wait for k8s-apps to be running ...
	I1014 15:06:33.750358   72390 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:33.750398   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:33.770342   72390 system_svc.go:56] duration metric: took 19.978034ms WaitForService to wait for kubelet
	I1014 15:06:33.770370   72390 kubeadm.go:582] duration metric: took 4m26.610427104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:33.770392   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:33.774149   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:33.774176   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:33.774190   72390 node_conditions.go:105] duration metric: took 3.792746ms to run NodePressure ...
	I1014 15:06:33.774203   72390 start.go:241] waiting for startup goroutines ...
	I1014 15:06:33.774217   72390 start.go:246] waiting for cluster config update ...
	I1014 15:06:33.774232   72390 start.go:255] writing updated cluster config ...
	I1014 15:06:33.774560   72390 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:33.823879   72390 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:33.825962   72390 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-201291" cluster and "default" namespace by default
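The default-k8s-diff-port-201291 profile is now up, with every kube-system pod Running except metrics-server, which is still Pending on its readiness check. An illustrative way to confirm the same state outside the test harness (not part of this run):

    kubectl --context default-k8s-diff-port-201291 get nodes
    kubectl --context default-k8s-diff-port-201291 -n kube-system get pods
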
	I1014 15:06:33.976430   72173 kubeadm.go:310] [api-check] The API server is healthy after 5.00773575s
	I1014 15:06:33.990496   72173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:06:34.010821   72173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:06:34.051244   72173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:06:34.051513   72173 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-989166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:06:34.066447   72173 kubeadm.go:310] [bootstrap-token] Using token: 46olqw.t0lfd7bmyz0olhbh
	I1014 15:06:34.067925   72173 out.go:235]   - Configuring RBAC rules ...
	I1014 15:06:34.068073   72173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:06:34.077775   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:06:34.097676   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:06:34.103212   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:06:34.112640   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:06:34.119886   72173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:06:34.382372   72173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:06:34.825514   72173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:06:35.383856   72173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:06:35.383877   72173 kubeadm.go:310] 
	I1014 15:06:35.383939   72173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:06:35.383976   72173 kubeadm.go:310] 
	I1014 15:06:35.384094   72173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:06:35.384103   72173 kubeadm.go:310] 
	I1014 15:06:35.384136   72173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:06:35.384223   72173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:06:35.384286   72173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:06:35.384311   72173 kubeadm.go:310] 
	I1014 15:06:35.384414   72173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:06:35.384430   72173 kubeadm.go:310] 
	I1014 15:06:35.384499   72173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:06:35.384512   72173 kubeadm.go:310] 
	I1014 15:06:35.384597   72173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:06:35.384685   72173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:06:35.384744   72173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:06:35.384750   72173 kubeadm.go:310] 
	I1014 15:06:35.384821   72173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:06:35.384928   72173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:06:35.384940   72173 kubeadm.go:310] 
	I1014 15:06:35.385047   72173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385192   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:06:35.385224   72173 kubeadm.go:310] 	--control-plane 
	I1014 15:06:35.385231   72173 kubeadm.go:310] 
	I1014 15:06:35.385322   72173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:06:35.385334   72173 kubeadm.go:310] 
	I1014 15:06:35.385449   72173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385588   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:06:35.386604   72173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
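kubeadm's init summary above embeds a bootstrap token and the SHA-256 hash of the cluster CA public key for node discovery. For reference only (standard kubeadm/openssl invocations, not commands taken from this run): if the token expires before another node joins, a fresh join command can be printed, and the CA hash can be recomputed from the certificate on the control-plane node:

    sudo kubeadm token create --print-join-command
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
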
	I1014 15:06:35.386674   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:06:35.386689   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:06:35.388617   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:06:31.069009   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:33.565864   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:35.390017   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:06:35.402242   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
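minikube has just written its bridge CNI configuration to /etc/cni/net.d/1-k8s.conflist on the embed-certs-989166 node (the 496-byte scp above); the file contents are not reproduced in this log. To inspect what was actually written (illustrative, assuming the profile is still running):

    minikube -p embed-certs-989166 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
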
	I1014 15:06:35.428958   72173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:06:35.429016   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:35.429080   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-989166 minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=embed-certs-989166 minikube.k8s.io/primary=true
	I1014 15:06:35.475775   72173 ops.go:34] apiserver oom_adj: -16
	I1014 15:06:35.645234   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.145613   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.646197   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.145401   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.645956   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.145978   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.645292   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.145444   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.646019   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.869659   72173 kubeadm.go:1113] duration metric: took 4.440701402s to wait for elevateKubeSystemPrivileges
	I1014 15:06:39.869695   72173 kubeadm.go:394] duration metric: took 5m1.76989803s to StartCluster
	I1014 15:06:39.869713   72173 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.869797   72173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:06:39.872564   72173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.872947   72173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:06:39.873165   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:06:39.873085   72173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:06:39.873246   72173 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-989166"
	I1014 15:06:39.873256   72173 addons.go:69] Setting metrics-server=true in profile "embed-certs-989166"
	I1014 15:06:39.873273   72173 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-989166"
	I1014 15:06:39.873272   72173 addons.go:69] Setting default-storageclass=true in profile "embed-certs-989166"
	I1014 15:06:39.873319   72173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-989166"
	W1014 15:06:39.873282   72173 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:06:39.873417   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873282   72173 addons.go:234] Setting addon metrics-server=true in "embed-certs-989166"
	W1014 15:06:39.873476   72173 addons.go:243] addon metrics-server should already be in state true
	I1014 15:06:39.873504   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873875   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873888   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873920   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873947   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873986   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.874050   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.874921   72173 out.go:177] * Verifying Kubernetes components...
	I1014 15:06:39.876972   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1014 15:06:39.893367   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I1014 15:06:39.893905   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.893915   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894023   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894471   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894493   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894651   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894677   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894713   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894731   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894942   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895073   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895563   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.895593   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.895778   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895970   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.896249   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.896293   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.899661   72173 addons.go:234] Setting addon default-storageclass=true in "embed-certs-989166"
	W1014 15:06:39.899685   72173 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:06:39.899714   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.900088   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.900131   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.912591   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1014 15:06:39.913089   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.913630   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.913652   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.914099   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.914287   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.914839   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1014 15:06:39.915288   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.915783   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.915802   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.916147   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.916171   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.916382   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.917766   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.917796   72173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:06:39.919192   72173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:06:35.567508   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:38.065792   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:40.066618   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:39.919297   72173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:39.919320   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:06:39.919339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.920468   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:06:39.920489   72173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:06:39.920507   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.921603   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1014 15:06:39.921970   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.922502   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.922525   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.922994   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.923333   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923585   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.923627   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.923826   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.923846   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923876   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924028   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.924270   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.924291   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.924310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924397   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.924674   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924840   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.925027   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.925201   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.945435   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1014 15:06:39.945958   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.946468   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.946497   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.946855   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.947023   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.948734   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.948924   72173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:39.948942   72173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:06:39.948966   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.951019   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.951437   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951570   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.951742   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.951918   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.952058   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:40.129893   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:06:40.215427   72173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224710   72173 node_ready.go:49] node "embed-certs-989166" has status "Ready":"True"
	I1014 15:06:40.224731   72173 node_ready.go:38] duration metric: took 9.266994ms for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224742   72173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:40.230651   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:40.394829   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:40.422573   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:40.430300   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:06:40.430319   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:06:40.503826   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:06:40.503857   72173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:06:40.586087   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.586116   72173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:06:40.726605   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.887453   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887475   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.887809   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.887857   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.887869   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.887886   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887898   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.888127   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.888150   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.888160   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.901694   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.901717   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.902091   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.902103   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.902111   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.352636   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.352670   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.352963   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:41.353017   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353029   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.353036   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.353043   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.353274   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353302   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578200   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578219   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578484   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578529   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578554   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578588   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578827   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578844   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578854   72173 addons.go:475] Verifying addon metrics-server=true in "embed-certs-989166"
	I1014 15:06:41.581312   72173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:06:41.582506   72173 addons.go:510] duration metric: took 1.709432803s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
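With the metrics-server addon applied, the test now waits for its pod to become Ready; the same wait for another profile times out with a 4m0s WaitExtra error further down in this log. The addon's health can be checked directly against the aggregation layer (illustrative commands, not emitted by the test):

    kubectl --context embed-certs-989166 get apiservice v1beta1.metrics.k8s.io
    kubectl --context embed-certs-989166 -n kube-system get deploy metrics-server
    kubectl --context embed-certs-989166 top nodes
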
	I1014 15:06:42.237265   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.240605   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:42.067701   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.566134   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:46.738094   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:48.739238   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.238145   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.238167   72173 pod_ready.go:82] duration metric: took 9.007493385s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.238176   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243268   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.243299   72173 pod_ready.go:82] duration metric: took 5.116183ms for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243311   72173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.247979   72173 pod_ready.go:93] pod "etcd-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.248001   72173 pod_ready.go:82] duration metric: took 4.682826ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.248009   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252590   72173 pod_ready.go:93] pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.252615   72173 pod_ready.go:82] duration metric: took 4.599399ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252624   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257541   72173 pod_ready.go:93] pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.257566   72173 pod_ready.go:82] duration metric: took 4.935116ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257575   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:47.064934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.066284   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.635873   72173 pod_ready.go:93] pod "kube-proxy-g572s" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.635895   72173 pod_ready.go:82] duration metric: took 378.313947ms for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.635904   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035141   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:50.035169   72173 pod_ready.go:82] duration metric: took 399.257073ms for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035179   72173 pod_ready.go:39] duration metric: took 9.810424567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:50.035195   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:50.035258   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:50.054964   72173 api_server.go:72] duration metric: took 10.181978114s to wait for apiserver process to appear ...
	I1014 15:06:50.054996   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:50.055020   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:06:50.061606   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:06:50.063380   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:50.063411   72173 api_server.go:131] duration metric: took 8.40661ms to wait for apiserver health ...
	I1014 15:06:50.063421   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:50.239258   72173 system_pods.go:59] 9 kube-system pods found
	I1014 15:06:50.239286   72173 system_pods.go:61] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.239292   72173 system_pods.go:61] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.239295   72173 system_pods.go:61] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.239299   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.239303   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.239305   72173 system_pods.go:61] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.239308   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.239314   72173 system_pods.go:61] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.239317   72173 system_pods.go:61] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.239325   72173 system_pods.go:74] duration metric: took 175.89649ms to wait for pod list to return data ...
	I1014 15:06:50.239334   72173 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:50.435980   72173 default_sa.go:45] found service account: "default"
	I1014 15:06:50.436007   72173 default_sa.go:55] duration metric: took 196.667838ms for default service account to be created ...
	I1014 15:06:50.436017   72173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:50.639185   72173 system_pods.go:86] 9 kube-system pods found
	I1014 15:06:50.639224   72173 system_pods.go:89] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.639234   72173 system_pods.go:89] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.639241   72173 system_pods.go:89] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.639248   72173 system_pods.go:89] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.639254   72173 system_pods.go:89] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.639262   72173 system_pods.go:89] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.639269   72173 system_pods.go:89] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.639283   72173 system_pods.go:89] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.639295   72173 system_pods.go:89] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.639309   72173 system_pods.go:126] duration metric: took 203.286322ms to wait for k8s-apps to be running ...
	I1014 15:06:50.639327   72173 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:50.639388   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:50.655377   72173 system_svc.go:56] duration metric: took 16.0447ms WaitForService to wait for kubelet
	I1014 15:06:50.655402   72173 kubeadm.go:582] duration metric: took 10.782421893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:50.655425   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:50.835507   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:50.835543   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:50.835556   72173 node_conditions.go:105] duration metric: took 180.126755ms to run NodePressure ...
	I1014 15:06:50.835570   72173 start.go:241] waiting for startup goroutines ...
	I1014 15:06:50.835580   72173 start.go:246] waiting for cluster config update ...
	I1014 15:06:50.835594   72173 start.go:255] writing updated cluster config ...
	I1014 15:06:50.835924   72173 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:50.883737   72173 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:50.886200   72173 out.go:177] * Done! kubectl is now configured to use "embed-certs-989166" cluster and "default" namespace by default
	I1014 15:06:51.066344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:53.566466   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:56.066734   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:58.567007   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:01.066112   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:03.068758   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:05.566174   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:07.566274   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:09.566829   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:10.525694   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:07:10.526665   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:10.526908   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:12.066402   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:13.560638   71679 pod_ready.go:82] duration metric: took 4m0.000980901s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	E1014 15:07:13.560669   71679 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:07:13.560693   71679 pod_ready.go:39] duration metric: took 4m13.04495779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:13.560725   71679 kubeadm.go:597] duration metric: took 4m21.006404411s to restartPrimaryControlPlane
	W1014 15:07:13.560791   71679 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:07:13.560823   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:07:15.527128   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:15.527376   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:25.527779   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:25.528060   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
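Process 72639, another profile starting in parallel, is stuck at kubeadm's kubelet-check: nothing answers on 127.0.0.1:10248, so the static control-plane pods never come up. A generic triage checklist on the affected node (not taken from this run) is to check the service state, its recent journal, and the health endpoint kubeadm is polling:

    sudo systemctl status kubelet
    sudo journalctl -u kubelet -n 100 --no-pager
    curl -sSL http://localhost:10248/healthz
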
	I1014 15:07:39.775370   71679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.214519412s)
	I1014 15:07:39.775448   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:07:39.790736   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:07:39.800575   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:07:39.810380   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:07:39.810402   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:07:39.810462   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:07:39.819880   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:07:39.819938   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:07:39.830542   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:07:39.840268   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:07:39.840318   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:07:39.849727   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.858513   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:07:39.858651   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.869154   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:07:39.878724   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:07:39.878798   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:07:39.888123   71679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:07:39.942676   71679 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:07:39.942771   71679 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:07:40.060558   71679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:07:40.060698   71679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:07:40.060861   71679 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:07:40.076085   71679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:07:40.078200   71679 out.go:235]   - Generating certificates and keys ...
	I1014 15:07:40.078301   71679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:07:40.078381   71679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:07:40.078505   71679 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:07:40.078620   71679 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:07:40.078717   71679 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:07:40.078794   71679 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:07:40.078887   71679 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:07:40.078973   71679 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:07:40.079069   71679 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:07:40.079161   71679 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:07:40.079234   71679 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:07:40.079315   71679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:07:40.177082   71679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:07:40.264965   71679 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:07:40.415660   71679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:07:40.556759   71679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:07:40.727152   71679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:07:40.727573   71679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:07:40.730409   71679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:07:40.732204   71679 out.go:235]   - Booting up control plane ...
	I1014 15:07:40.732328   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:07:40.732440   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:07:40.732529   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:07:40.751839   71679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:07:40.758034   71679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:07:40.758095   71679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:07:40.895135   71679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:07:40.895254   71679 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:07:41.397066   71679 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.194797ms
	I1014 15:07:41.397209   71679 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:07:46.401247   71679 kubeadm.go:310] [api-check] The API server is healthy after 5.002197966s
	I1014 15:07:46.419134   71679 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:07:46.433128   71679 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:07:46.477079   71679 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:07:46.477289   71679 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:07:46.492703   71679 kubeadm.go:310] [bootstrap-token] Using token: 1vsv04.mf3pqj2ow157sq8h
	I1014 15:07:46.494314   71679 out.go:235]   - Configuring RBAC rules ...
	I1014 15:07:46.494467   71679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:07:46.501090   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:07:46.515987   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:07:46.522417   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:07:46.528612   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:07:46.536975   71679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:07:46.810642   71679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:07:47.240531   71679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:07:47.810279   71679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:07:47.811169   71679 kubeadm.go:310] 
	I1014 15:07:47.811230   71679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:07:47.811238   71679 kubeadm.go:310] 
	I1014 15:07:47.811307   71679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:07:47.811312   71679 kubeadm.go:310] 
	I1014 15:07:47.811335   71679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:07:47.811388   71679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:07:47.811440   71679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:07:47.811447   71679 kubeadm.go:310] 
	I1014 15:07:47.811501   71679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:07:47.811507   71679 kubeadm.go:310] 
	I1014 15:07:47.811546   71679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:07:47.811553   71679 kubeadm.go:310] 
	I1014 15:07:47.811600   71679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:07:47.811667   71679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:07:47.811755   71679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:07:47.811771   71679 kubeadm.go:310] 
	I1014 15:07:47.811844   71679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:07:47.811912   71679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:07:47.811921   71679 kubeadm.go:310] 
	I1014 15:07:47.811999   71679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812091   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:07:47.812139   71679 kubeadm.go:310] 	--control-plane 
	I1014 15:07:47.812153   71679 kubeadm.go:310] 
	I1014 15:07:47.812231   71679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:07:47.812238   71679 kubeadm.go:310] 
	I1014 15:07:47.812306   71679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812393   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:07:47.814071   71679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:07:47.814103   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:07:47.814113   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:07:47.816033   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:07:45.528527   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:45.528768   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:47.817325   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:07:47.829639   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
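The 1-k8s.conflist copied above is minikube's bridge CNI configuration; its exact contents are not shown in this log. As a hedged sketch only, a bridge conflist of this kind typically looks roughly like the following (the bridge name, subnet, and the 99-example file name are illustrative assumptions, not values from this run):

    # Hedged sketch: approximate shape of a bridge CNI config, NOT the exact
    # 496-byte file minikube copied above. Bridge name and subnet are assumptions.
    sudo tee /etc/cni/net.d/99-example-bridge.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "example-bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF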
	I1014 15:07:47.847797   71679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:07:47.847857   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:47.847929   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-813300 minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=no-preload-813300 minikube.k8s.io/primary=true
	I1014 15:07:48.039959   71679 ops.go:34] apiserver oom_adj: -16
	I1014 15:07:48.040095   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:48.540295   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.040911   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.540233   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.040146   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.540494   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.041033   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.540516   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.040935   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.146854   71679 kubeadm.go:1113] duration metric: took 4.299055033s to wait for elevateKubeSystemPrivileges
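The repeated `kubectl get sa default` calls above are the wait loop behind elevateKubeSystemPrivileges: minikube polls until the default service account exists before relying on the cluster-admin binding it created. A minimal shell sketch of the same wait (kubectl on PATH, a working kubeconfig, and the retry budget are all assumptions):

    # Hedged sketch: poll until the "default" ServiceAccount appears, then proceed.
    for i in $(seq 1 60); do
      if kubectl get serviceaccount default -n default >/dev/null 2>&1; then
        echo "default service account is ready"
        break
      fi
      sleep 0.5
    done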
	I1014 15:07:52.146890   71679 kubeadm.go:394] duration metric: took 4m59.642546726s to StartCluster
	I1014 15:07:52.146906   71679 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.146987   71679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:07:52.148782   71679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.149067   71679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:07:52.149168   71679 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:07:52.149303   71679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-813300"
	I1014 15:07:52.149333   71679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-813300"
	I1014 15:07:52.149342   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1014 15:07:52.149355   71679 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:07:52.149378   71679 addons.go:69] Setting default-storageclass=true in profile "no-preload-813300"
	I1014 15:07:52.149390   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149412   71679 addons.go:69] Setting metrics-server=true in profile "no-preload-813300"
	I1014 15:07:52.149447   71679 addons.go:234] Setting addon metrics-server=true in "no-preload-813300"
	W1014 15:07:52.149461   71679 addons.go:243] addon metrics-server should already be in state true
	I1014 15:07:52.149494   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149421   71679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-813300"
	I1014 15:07:52.149748   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149789   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149861   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149890   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149905   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149928   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.150482   71679 out.go:177] * Verifying Kubernetes components...
	I1014 15:07:52.152252   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:07:52.167205   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1014 15:07:52.170723   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I1014 15:07:52.170742   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.170728   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1014 15:07:52.171111   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171321   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171386   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171678   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171702   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171717   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.171900   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171916   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.172164   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172243   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172279   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172325   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.172386   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.172868   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172916   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.175482   71679 addons.go:234] Setting addon default-storageclass=true in "no-preload-813300"
	W1014 15:07:52.175502   71679 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:07:52.175529   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.175763   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.175792   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.190835   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1014 15:07:52.191422   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.191767   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I1014 15:07:52.191901   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1014 15:07:52.192010   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.192027   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192317   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192436   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.192481   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192988   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193010   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192992   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193060   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.193474   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193524   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.193530   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193563   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.193729   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.193770   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.195702   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.195770   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.197642   71679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:07:52.197652   71679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:07:52.198957   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:07:52.198978   71679 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:07:52.198998   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.199075   71679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.199096   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:07:52.199111   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.202637   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203064   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203088   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203245   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.203515   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.203519   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203663   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.203812   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.203878   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.204187   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.204377   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.204535   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.204683   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.231332   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I1014 15:07:52.231813   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.232320   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.232344   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.232645   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.232836   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.234309   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.234570   71679 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.234585   71679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:07:52.234622   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.237749   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238364   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.238393   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238562   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.238744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.238903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.239031   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.375830   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:07:52.401606   71679 node_ready.go:35] waiting up to 6m0s for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431363   71679 node_ready.go:49] node "no-preload-813300" has status "Ready":"True"
	I1014 15:07:52.431393   71679 node_ready.go:38] duration metric: took 29.758277ms for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431405   71679 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:52.446747   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:52.501642   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:07:52.501664   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:07:52.509733   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.515833   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.536485   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:07:52.536508   71679 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:07:52.622269   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.622299   71679 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:07:52.702873   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.909827   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.909865   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910194   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910209   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.910235   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.910249   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910510   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910525   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918161   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.918182   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.918473   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.918493   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918480   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:53.707659   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.191781585s)
	I1014 15:07:53.707706   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.707719   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708011   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708035   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:53.708052   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.708062   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708330   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708346   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.060665   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.357747934s)
	I1014 15:07:54.060752   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.060770   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.061069   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.061153   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.061164   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.061173   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.061184   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.062712   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.062787   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.062797   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.062811   71679 addons.go:475] Verifying addon metrics-server=true in "no-preload-813300"
	I1014 15:07:54.064762   71679 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:07:54.066623   71679 addons.go:510] duration metric: took 1.917465271s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
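With default-storageclass, storage-provisioner, and metrics-server enabled, a hedged way to confirm they actually came up (commands assumed, not taken from this log) is:

    # Hedged sketch: basic checks for the three addons enabled above.
    kubectl -n kube-system get deploy metrics-server
    kubectl -n kube-system get pods -l k8s-app=metrics-server   # label assumed from the standard addon manifest
    kubectl get storageclass                                    # default-storageclass
    kubectl top nodes                                           # only succeeds once metrics-server is serving metrics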
	I1014 15:07:54.454216   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:56.455649   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:56.455674   71679 pod_ready.go:82] duration metric: took 4.00889709s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:56.455689   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:58.461687   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:59.962360   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.962382   71679 pod_ready.go:82] duration metric: took 3.506686516s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.962391   71679 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969241   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.969261   71679 pod_ready.go:82] duration metric: took 6.864356ms for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969270   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974810   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.974828   71679 pod_ready.go:82] duration metric: took 5.552122ms for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974837   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979555   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.979580   71679 pod_ready.go:82] duration metric: took 4.735265ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979592   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985111   71679 pod_ready.go:93] pod "kube-proxy-54rrd" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.985138   71679 pod_ready.go:82] duration metric: took 5.538126ms for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985150   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359524   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:08:00.359548   71679 pod_ready.go:82] duration metric: took 374.389838ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359558   71679 pod_ready.go:39] duration metric: took 7.928141116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:08:00.359575   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:08:00.359626   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:08:00.376115   71679 api_server.go:72] duration metric: took 8.22700683s to wait for apiserver process to appear ...
	I1014 15:08:00.376144   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:08:00.376169   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:08:00.381225   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:08:00.382348   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:08:00.382377   71679 api_server.go:131] duration metric: took 6.225832ms to wait for apiserver health ...
	I1014 15:08:00.382386   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:08:00.563350   71679 system_pods.go:59] 9 kube-system pods found
	I1014 15:08:00.563382   71679 system_pods.go:61] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.563386   71679 system_pods.go:61] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.563390   71679 system_pods.go:61] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.563394   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.563399   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.563402   71679 system_pods.go:61] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.563405   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.563412   71679 system_pods.go:61] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.563416   71679 system_pods.go:61] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.563424   71679 system_pods.go:74] duration metric: took 181.032852ms to wait for pod list to return data ...
	I1014 15:08:00.563436   71679 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:08:00.760054   71679 default_sa.go:45] found service account: "default"
	I1014 15:08:00.760084   71679 default_sa.go:55] duration metric: took 196.637678ms for default service account to be created ...
	I1014 15:08:00.760095   71679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:08:00.962545   71679 system_pods.go:86] 9 kube-system pods found
	I1014 15:08:00.962577   71679 system_pods.go:89] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.962583   71679 system_pods.go:89] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.962587   71679 system_pods.go:89] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.962591   71679 system_pods.go:89] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.962605   71679 system_pods.go:89] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.962609   71679 system_pods.go:89] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.962613   71679 system_pods.go:89] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.962619   71679 system_pods.go:89] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.962623   71679 system_pods.go:89] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.962633   71679 system_pods.go:126] duration metric: took 202.532202ms to wait for k8s-apps to be running ...
	I1014 15:08:00.962640   71679 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:08:00.962682   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:00.980272   71679 system_svc.go:56] duration metric: took 17.624381ms WaitForService to wait for kubelet
	I1014 15:08:00.980310   71679 kubeadm.go:582] duration metric: took 8.831207019s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:08:00.980333   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:08:01.160914   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:08:01.160947   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:08:01.160961   71679 node_conditions.go:105] duration metric: took 180.622279ms to run NodePressure ...
	I1014 15:08:01.160976   71679 start.go:241] waiting for startup goroutines ...
	I1014 15:08:01.160985   71679 start.go:246] waiting for cluster config update ...
	I1014 15:08:01.161000   71679 start.go:255] writing updated cluster config ...
	I1014 15:08:01.161357   71679 ssh_runner.go:195] Run: rm -f paused
	I1014 15:08:01.212486   71679 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:08:01.215083   71679 out.go:177] * Done! kubectl is now configured to use "no-preload-813300" cluster and "default" namespace by default
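The no-preload-813300 start is now complete. A quick sanity check of such a cluster (hedged sketch; none of these commands appear in the log) would be:

    # Hedged sketch: confirm the context and that the control plane answers.
    kubectl config current-context      # expect "no-preload-813300"
    kubectl get nodes -o wide
    kubectl -n kube-system get pods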
	I1014 15:08:25.530669   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:08:25.530970   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530998   72639 kubeadm.go:310] 
	I1014 15:08:25.531059   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:08:25.531114   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:08:25.531125   72639 kubeadm.go:310] 
	I1014 15:08:25.531177   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:08:25.531238   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:08:25.531381   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:08:25.531392   72639 kubeadm.go:310] 
	I1014 15:08:25.531527   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:08:25.531587   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:08:25.531633   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:08:25.531643   72639 kubeadm.go:310] 
	I1014 15:08:25.531766   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:08:25.531872   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:08:25.531891   72639 kubeadm.go:310] 
	I1014 15:08:25.532038   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:08:25.532174   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:08:25.532281   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:08:25.532377   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:08:25.532418   72639 kubeadm.go:310] 
	I1014 15:08:25.532543   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:08:25.532640   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:08:25.532742   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
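The 72639 run interleaved above is the old-k8s-version (v1.20.0) start, and it never gets a healthy kubelet on port 10248. Beyond the systemctl/journalctl/crictl commands kubeadm itself suggests, a common thing to check on a cri-o node is whether the kubelet and the runtime agree on the cgroup driver; the sketch below uses the usual default paths, which are assumptions rather than values confirmed by this log:

    # Hedged sketch: compare cgroup driver settings between cri-o and the kubelet.
    # Paths are common defaults and may differ inside the minikube guest.
    sudo grep -i cgroup_manager /etc/crio/crio.conf /etc/crio/crio.conf.d/* 2>/dev/null
    sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml 2>/dev/null
    sudo journalctl -u kubelet --no-pager | tail -n 50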
	W1014 15:08:25.532833   72639 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 15:08:25.532870   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:08:31.003635   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.470741012s)
	I1014 15:08:31.003724   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:31.018666   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:08:31.029707   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:08:31.029729   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:08:31.029776   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:08:31.039554   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:08:31.039625   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:08:31.049748   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:08:31.059618   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:08:31.059682   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:08:31.069369   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.078321   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:08:31.078385   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.088006   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:08:31.096681   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:08:31.096742   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
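The four grep/rm pairs above are minikube's stale-config cleanup: any /etc/kubernetes/*.conf that does not reference https://control-plane.minikube.internal:8443 is removed before kubeadm init is retried. A compact shell sketch of that logic (endpoint taken from the log; the loop itself is an assumption about equivalent behavior):

    # Hedged sketch: drop kubeconfig files that don't reference the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done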
	I1014 15:08:31.106269   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:08:31.182768   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:08:31.182833   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:08:31.341660   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:08:31.341833   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:08:31.342008   72639 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:08:31.538731   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:08:31.540933   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:08:31.541037   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:08:31.541124   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:08:31.541270   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:08:31.541386   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:08:31.541486   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:08:31.541559   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:08:31.541663   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:08:31.541750   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:08:31.542000   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:08:31.542534   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:08:31.542627   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:08:31.542711   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:08:31.847005   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:08:32.049586   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:08:32.355652   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:08:32.511031   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:08:32.526310   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:08:32.526755   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:08:32.526841   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:08:32.665898   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:08:32.667688   72639 out.go:235]   - Booting up control plane ...
	I1014 15:08:32.667806   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:08:32.681232   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:08:32.682929   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:08:32.683704   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:08:32.685936   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:09:12.687998   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:09:12.688248   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:12.688517   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:17.689026   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:17.689213   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:27.689821   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:27.690119   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:47.690936   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:47.691185   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691438   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:10:27.691721   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691744   72639 kubeadm.go:310] 
	I1014 15:10:27.691779   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:10:27.691847   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:10:27.691867   72639 kubeadm.go:310] 
	I1014 15:10:27.691907   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:10:27.691972   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:10:27.692124   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:10:27.692136   72639 kubeadm.go:310] 
	I1014 15:10:27.692253   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:10:27.692311   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:10:27.692352   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:10:27.692363   72639 kubeadm.go:310] 
	I1014 15:10:27.692497   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:10:27.692617   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:10:27.692633   72639 kubeadm.go:310] 
	I1014 15:10:27.692787   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:10:27.692915   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:10:27.693051   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:10:27.693146   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:10:27.693158   72639 kubeadm.go:310] 
	I1014 15:10:27.693497   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:10:27.693627   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:10:27.693710   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 15:10:27.693770   72639 kubeadm.go:394] duration metric: took 8m7.905137486s to StartCluster
	I1014 15:10:27.693808   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:10:27.693863   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:10:27.735373   72639 cri.go:89] found id: ""
	I1014 15:10:27.735410   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.735419   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:10:27.735425   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:10:27.735484   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:10:27.775691   72639 cri.go:89] found id: ""
	I1014 15:10:27.775713   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.775721   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:10:27.775727   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:10:27.775778   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:10:27.811621   72639 cri.go:89] found id: ""
	I1014 15:10:27.811645   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.811653   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:10:27.811658   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:10:27.811718   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:10:27.850894   72639 cri.go:89] found id: ""
	I1014 15:10:27.850917   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.850925   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:10:27.850931   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:10:27.850979   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:10:27.891559   72639 cri.go:89] found id: ""
	I1014 15:10:27.891596   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.891608   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:10:27.891616   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:10:27.891671   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:10:27.929896   72639 cri.go:89] found id: ""
	I1014 15:10:27.929929   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.929942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:10:27.930002   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:10:27.930096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:10:27.964801   72639 cri.go:89] found id: ""
	I1014 15:10:27.964828   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.964839   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:10:27.964845   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:10:27.964905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:10:28.011737   72639 cri.go:89] found id: ""
	I1014 15:10:28.011761   72639 logs.go:282] 0 containers: []
	W1014 15:10:28.011769   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:10:28.011777   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:10:28.011788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:10:28.088053   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:10:28.088082   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:10:28.088098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:10:28.214495   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:10:28.214531   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:10:28.254766   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:10:28.254796   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:10:28.304942   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:10:28.304977   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1014 15:10:28.319674   72639 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 15:10:28.319729   72639 out.go:270] * 
	W1014 15:10:28.319783   72639 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.319802   72639 out.go:270] * 
	W1014 15:10:28.320716   72639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 15:10:28.324551   72639 out.go:201] 
	W1014 15:10:28.325905   72639 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.325940   72639 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 15:10:28.325985   72639 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 15:10:28.327473   72639 out.go:201] 
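	For reference, the kubelet troubleshooting steps suggested in the output above can be reproduced by hand on the affected node (e.g. via 'minikube ssh' into the failing profile). This is a minimal sketch using only the commands that appear in the log itself; profile-specific start flags are omitted and would need to be repeated from the original invocation:
	
		# Inspect the kubelet service and its recent journal entries
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet | tail -n 100
		# List any control-plane containers CRI-O managed to start (per the kubeadm hint above)
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		# If a cgroup-driver mismatch is suspected, retry with the driver pinned to systemd,
		# as the suggestion above proposes (other start flags for the profile assumed unchanged)
		minikube start --extra-config=kubelet.cgroup-driver=systemd
		# Collect full logs for a GitHub issue, as the message box above recommends
		minikube logs --file=logs.txt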
	
	
	==> CRI-O <==
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.313748817Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f1da156d-56cb-4be3-bc80-3ac50477b768 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.316510868Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9803526-31b2-4fbf-b9a1-d46f9caeae65 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.316969414Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919023316940194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9803526-31b2-4fbf-b9a1-d46f9caeae65 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.317419935Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2783d7dc-e7de-4916-8292-815e780e8868 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.317495142Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2783d7dc-e7de-4916-8292-815e780e8868 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.317754411Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2783d7dc-e7de-4916-8292-815e780e8868 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.357939343Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ddd7b06d-26f8-4cbd-99ef-1050f51b15e6 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.358031498Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ddd7b06d-26f8-4cbd-99ef-1050f51b15e6 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.360134221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=aa5d783e-0c73-43e7-be3e-f0b50fac7ee9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.360483010Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919023360458607,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=aa5d783e-0c73-43e7-be3e-f0b50fac7ee9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.361145452Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6b25fa57-492f-4f34-a995-f208426d4932 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.361213013Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6b25fa57-492f-4f34-a995-f208426d4932 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.361427056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6b25fa57-492f-4f34-a995-f208426d4932 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.365634624Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=51caa8e7-598e-40f5-9964-34c4302f6d35 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.365897528Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:74d6977fd0b2a22c3b778de25491a9021025f82bcd20c87cf09c9a93306f58b3,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-8vfll,Uid:cf3594da-9896-49ed-b47f-5bbea36c9aaf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918474184626780,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-8vfll,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf3594da-9896-49ed-b47f-5bbea36c9aaf,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:07:53.874062559Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:2d79bfdf-bda5-42bf-8ddf-73d7df4855db,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918474007493566,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-14T15:07:53.700042508Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-fjzn8,Uid:7850936e-8104-4e8f-a4cc-948579963790,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918472710117347,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7850936e-8104-4e8f-a4cc-948579963790,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:07:52.400962973Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-nvpvl,Uid:d926987d-9c61-4bf6-
83e3-97334715e1d5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918472669447953,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:07:52.360211001Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&PodSandboxMetadata{Name:kube-proxy-54rrd,Uid:0c8ab0de-c204-46f5-a725-5dcd9eff59d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918472553437079,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:07:52.245955288Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-813300,Uid:a005d01945afa403b756193f11f3824f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1728918461596154383,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.13:8443,kubernetes.io/config.hash: a005d01945afa403b756193f11f3824f,kubernetes.io/config.seen: 2024-10-14T15:07:41.141273004Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d3a7cdecaacf24ac8239e552cacac1cf
b68a89a56a497a73033d798ed1c5a708,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-813300,Uid:808894b816cffed524db94d6e34a052d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918461584162120,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 808894b816cffed524db94d6e34a052d,kubernetes.io/config.seen: 2024-10-14T15:07:41.141274241Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-813300,Uid:fdbaadf2aa4ad3fc6f15ade30860d76d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918461572340706,Labels:map[string]string{component: kube-sche
duler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fdbaadf2aa4ad3fc6f15ade30860d76d,kubernetes.io/config.seen: 2024-10-14T15:07:41.141275655Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-813300,Uid:16ad1f7c7ca791817a445f2eb5192551,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918461568083954,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.13:2379,
kubernetes.io/config.hash: 16ad1f7c7ca791817a445f2eb5192551,kubernetes.io/config.seen: 2024-10-14T15:07:41.141269372Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-813300,Uid:a005d01945afa403b756193f11f3824f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1728918174529895755,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.13:8443,kubernetes.io/config.hash: a005d01945afa403b756193f11f3824f,kubernetes.io/config.seen: 2024-10-14T15:02:53.988547060Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/intercep
tors.go:74" id=51caa8e7-598e-40f5-9964-34c4302f6d35 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.366612883Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9682b216-600e-4e49-89cc-ad22850ff9a9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.366969407Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9682b216-600e-4e49-89cc-ad22850ff9a9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.367175088Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9682b216-600e-4e49-89cc-ad22850ff9a9 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.409318903Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a23edac3-06fe-4417-bbea-15a553a0df18 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.409425125Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a23edac3-06fe-4417-bbea-15a553a0df18 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.410944018Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d341ef8-6900-4700-abd6-02d80ea3d0c9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.411371117Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919023411339143,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d341ef8-6900-4700-abd6-02d80ea3d0c9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.412008661Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0adba91a-f34a-4341-88af-181d7759580b name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.412061308Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0adba91a-f34a-4341-88af-181d7759580b name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:17:03 no-preload-813300 crio[711]: time="2024-10-14 15:17:03.412235250Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0adba91a-f34a-4341-88af-181d7759580b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fe5212fe3ebb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   f65d8d057416b       storage-provisioner
	03f2753934798       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   a2836801e53a0       coredns-7c65d6cfc9-nvpvl
	739b2529f0fdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   6a29826877e87       coredns-7c65d6cfc9-fjzn8
	842c65533db30       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   9 minutes ago       Running             kube-proxy                0                   2da3b6fbd747b       kube-proxy-54rrd
	6762e30b49a92       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   9 minutes ago       Running             kube-controller-manager   2                   d3a7cdecaacf2       kube-controller-manager-no-preload-813300
	5ac34ee741ac4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   9 minutes ago       Running             kube-apiserver            2                   6ee6ad98eab10       kube-apiserver-no-preload-813300
	870d6b62c80ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   efda81733a2d2       etcd-no-preload-813300
	af736f6784dc6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   9 minutes ago       Running             kube-scheduler            2                   de8829f46ea7a       kube-scheduler-no-preload-813300
	62baf067c7938       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Exited              kube-apiserver            1                   3eafb95cf605f       kube-apiserver-no-preload-813300
	
	
	==> coredns [03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-813300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-813300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=no-preload-813300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-813300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:16:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:13:04 +0000   Mon, 14 Oct 2024 15:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:13:04 +0000   Mon, 14 Oct 2024 15:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:13:04 +0000   Mon, 14 Oct 2024 15:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:13:04 +0000   Mon, 14 Oct 2024 15:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.13
	  Hostname:    no-preload-813300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06200cbdb49d457f814d09539b06f86f
	  System UUID:                06200cbd-b49d-457f-814d-09539b06f86f
	  Boot ID:                    45284b4f-e486-4be9-914a-4c32f145bb44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fjzn8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 coredns-7c65d6cfc9-nvpvl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m11s
	  kube-system                 etcd-no-preload-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m16s
	  kube-system                 kube-apiserver-no-preload-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 kube-controller-manager-no-preload-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-54rrd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m11s
	  kube-system                 kube-scheduler-no-preload-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m16s
	  kube-system                 metrics-server-6867b74b74-8vfll              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m10s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet          Node no-preload-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x8 over 9m22s)  kubelet          Node no-preload-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x7 over 9m22s)  kubelet          Node no-preload-813300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m16s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m16s                  kubelet          Node no-preload-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m16s                  kubelet          Node no-preload-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m16s                  kubelet          Node no-preload-813300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m12s                  node-controller  Node no-preload-813300 event: Registered Node no-preload-813300 in Controller
	
	
	==> dmesg <==
	[  +0.056720] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041792] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.330181] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.720417] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595444] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.623929] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.063621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053460] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.170675] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.149821] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.280489] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[ +15.664271] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.063930] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.831964] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +5.638392] kauditd_printk_skb: 100 callbacks suppressed
	[Oct14 15:03] kauditd_printk_skb: 87 callbacks suppressed
	[Oct14 15:07] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.180041] systemd-fstab-generator[3040]: Ignoring "noauto" option for root device
	[  +4.389658] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.658491] systemd-fstab-generator[3363]: Ignoring "noauto" option for root device
	[  +5.376177] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.124744] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.376643] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867] <==
	{"level":"info","ts":"2024-10-14T15:07:42.093128Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-14T15:07:42.093969Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"b42979a4111f16a1","initial-advertise-peer-urls":["https://192.168.61.13:2380"],"listen-peer-urls":["https://192.168.61.13:2380"],"advertise-client-urls":["https://192.168.61.13:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.13:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-14T15:07:42.094079Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-14T15:07:42.094182Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.61.13:2380"}
	{"level":"info","ts":"2024-10-14T15:07:42.094290Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.61.13:2380"}
	{"level":"info","ts":"2024-10-14T15:07:42.330741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-14T15:07:42.330796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-14T15:07:42.330821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 received MsgPreVoteResp from b42979a4111f16a1 at term 1"}
	{"level":"info","ts":"2024-10-14T15:07:42.330832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 became candidate at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.330840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 received MsgVoteResp from b42979a4111f16a1 at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.330848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 became leader at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.330854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b42979a4111f16a1 elected leader b42979a4111f16a1 at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.334866Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.339007Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b42979a4111f16a1","local-member-attributes":"{Name:no-preload-813300 ClientURLs:[https://192.168.61.13:2379]}","request-path":"/0/members/b42979a4111f16a1/attributes","cluster-id":"bb1e88613a134efc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T15:07:42.341738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:07:42.342143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:07:42.344746Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T15:07:42.344810Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T15:07:42.345554Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:07:42.348964Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.13:2379"}
	{"level":"info","ts":"2024-10-14T15:07:42.346738Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb1e88613a134efc","local-member-id":"b42979a4111f16a1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.347293Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:07:42.357801Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.357875Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.359116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:17:03 up 14 min,  0 users,  load average: 0.44, 0.18, 0.12
	Linux no-preload-813300 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858] <==
	E1014 15:12:45.480220       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1014 15:12:45.480307       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:12:45.481747       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:12:45.481837       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:13:45.482550       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:13:45.482615       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:13:45.482786       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:13:45.482911       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:13:45.483769       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:13:45.484872       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:15:45.484485       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:15:45.484938       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:15:45.485086       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:15:45.485164       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:15:45.486559       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:15:45.486612       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b] <==
	W1014 15:07:34.673062       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.677424       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.777558       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.794211       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.940348       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.078072       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.124333       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.164358       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.189067       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.233552       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.254041       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.289063       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.336030       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.343801       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.408071       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.433895       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.465939       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.473297       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.490253       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.512340       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.536906       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.998853       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:39.162045       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:39.360733       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:39.367426       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c] <==
	E1014 15:11:51.358850       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:11:51.903043       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:12:21.365335       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:12:21.910955       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:12:51.374496       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:12:51.919403       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:13:04.270802       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-813300"
	E1014 15:13:21.380515       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:13:21.929134       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:13:51.387303       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:13:51.936648       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:13:57.186887       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="1.666948ms"
	I1014 15:14:11.181357       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="168.413µs"
	E1014 15:14:21.394046       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:14:21.947401       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:14:51.402595       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:14:51.956377       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:15:21.409415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:15:21.964412       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:15:51.417400       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:15:51.977076       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:16:21.423993       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:16:21.986139       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:16:51.430980       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:16:51.994799       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:07:53.714041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:07:53.755632       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.13"]
	E1014 15:07:53.755916       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:07:53.980337       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:07:53.980400       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:07:53.980429       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:07:53.991604       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:07:53.992005       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:07:53.992036       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:07:53.993896       1 config.go:199] "Starting service config controller"
	I1014 15:07:53.993942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:07:53.993978       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:07:53.993982       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:07:53.995137       1 config.go:328] "Starting node config controller"
	I1014 15:07:53.995206       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:07:54.103772       1 shared_informer.go:320] Caches are synced for node config
	I1014 15:07:54.103853       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:07:54.103894       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031] <==
	W1014 15:07:44.556786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:44.556838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:44.556874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:44.556912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.478583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 15:07:45.478869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.539627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:45.539981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.545198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 15:07:45.545371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.583910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:45.584055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.660857       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 15:07:45.661089       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 15:07:45.713881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 15:07:45.713992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.773580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:45.773783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.808430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 15:07:45.808483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.835530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 15:07:45.835590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.857456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 15:07:45.857510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 15:07:48.233732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:15:54 no-preload-813300 kubelet[3370]: E1014 15:15:54.165926    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:15:57 no-preload-813300 kubelet[3370]: E1014 15:15:57.290555    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918957289994198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:15:57 no-preload-813300 kubelet[3370]: E1014 15:15:57.290582    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918957289994198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:07 no-preload-813300 kubelet[3370]: E1014 15:16:07.165798    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:16:07 no-preload-813300 kubelet[3370]: E1014 15:16:07.292182    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918967291752306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:07 no-preload-813300 kubelet[3370]: E1014 15:16:07.292441    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918967291752306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:17 no-preload-813300 kubelet[3370]: E1014 15:16:17.294717    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918977294307851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:17 no-preload-813300 kubelet[3370]: E1014 15:16:17.294757    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918977294307851,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:19 no-preload-813300 kubelet[3370]: E1014 15:16:19.165310    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:16:27 no-preload-813300 kubelet[3370]: E1014 15:16:27.296260    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918987295946479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:27 no-preload-813300 kubelet[3370]: E1014 15:16:27.296291    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918987295946479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:33 no-preload-813300 kubelet[3370]: E1014 15:16:33.165361    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:16:37 no-preload-813300 kubelet[3370]: E1014 15:16:37.298020    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918997297483671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:37 no-preload-813300 kubelet[3370]: E1014 15:16:37.298108    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728918997297483671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:45 no-preload-813300 kubelet[3370]: E1014 15:16:45.164715    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:16:47 no-preload-813300 kubelet[3370]: E1014 15:16:47.216996    3370 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:16:47 no-preload-813300 kubelet[3370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:16:47 no-preload-813300 kubelet[3370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:16:47 no-preload-813300 kubelet[3370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:16:47 no-preload-813300 kubelet[3370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:16:47 no-preload-813300 kubelet[3370]: E1014 15:16:47.301053    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919007300387671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:47 no-preload-813300 kubelet[3370]: E1014 15:16:47.301087    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919007300387671,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:57 no-preload-813300 kubelet[3370]: E1014 15:16:57.167893    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:16:57 no-preload-813300 kubelet[3370]: E1014 15:16:57.302570    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919017302243896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:16:57 no-preload-813300 kubelet[3370]: E1014 15:16:57.302632    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919017302243896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675] <==
	I1014 15:07:54.415632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 15:07:54.431382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 15:07:54.431443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 15:07:54.456955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 15:07:54.477136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"544b6cc6-ec59-4c30-9bdb-e6b0c42eb5fd", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-813300_333f6081-642e-4e06-a2d9-fe0ec4a4ed66 became leader
	I1014 15:07:54.477626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-813300_333f6081-642e-4e06-a2d9-fe0ec4a4ed66!
	I1014 15:07:54.578836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-813300_333f6081-642e-4e06-a2d9-fe0ec4a4ed66!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-813300 -n no-preload-813300
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-813300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8vfll
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-813300 describe pod metrics-server-6867b74b74-8vfll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-813300 describe pod metrics-server-6867b74b74-8vfll: exit status 1 (64.089126ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8vfll" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-813300 describe pod metrics-server-6867b74b74-8vfll: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:10:50.859134   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:11:06.401178   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:11:07.152753   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:11:22.162264   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:11:52.522875   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:12:01.835904   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:12:30.219772   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:12:38.241158   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:12:45.225614   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:12:49.187440   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:13:24.902356   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:13:36.994483   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:14:01.303477   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:14:09.474422   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:14:12.252112   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:14:27.794233   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:15:29.459855   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: (previous warning repeated 36 more times)
E1014 15:16:06.401725   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:16:07.152689   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: (previous warning repeated 14 more times)
E1014 15:16:22.162284   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:17:38.240766   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:17:49.187651   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:18:36.993855   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:19:27.795129   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
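Every warning in the block above targets the same endpoint (https://192.168.72.138:8443), so the poll loop never obtains a pod list and eventually trips the client-side rate limiter. A minimal manual reproduction, assuming the kubectl context carries the profile name old-k8s-version-399767 as minikube normally configures it, would be:

	# probe the apiserver health endpoint that the pod lister is failing against
	curl -k https://192.168.72.138:8443/healthz

	# the same pod query the helper issues, via kubectl
	kubectl --context old-k8s-version-399767 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While kube-apiserver stays down, both commands are expected to fail with the same connection refused seen in the warnings.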
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (231.350777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-399767" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
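For reference, the condition being polled here is roughly equivalent to the following kubectl wait; this is a sketch only, with the namespace, selector and 9m0s timeout taken from the log lines above and the context name assumed to match the profile:

	kubectl --context old-k8s-version-399767 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

With the apiserver unreachable this would fail on the first request rather than running out the timeout, which matches the status output collected below.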
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (229.541502ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
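A Running host combined with a Stopped apiserver points at the control-plane containers inside the VM rather than at the VM itself. One way to confirm, sketched under the assumption that the node is reachable over minikube ssh and runs the CRI-O runtime used throughout this job, is:

	# list the kube-apiserver container (including exited ones) inside the node
	out/minikube-linux-amd64 ssh -p old-k8s-version-399767 -- sudo crictl ps -a --name kube-apiserver

An exited or missing kube-apiserver container there would be consistent with every connection refused recorded above.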
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-399767 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-399767 logs -n 25: (1.572458778s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-517678 sudo cat                              | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo find                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo crio                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-517678                                       | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
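	(Editorial note: the table above is the recorded minikube command history for this run. As an illustration only, the final recorded start of the old-k8s-version profile corresponds to a single shell invocation roughly like the sketch below; every flag is copied from the last table entry, and the binary path out/minikube-linux-amd64 is the MINIKUBE_BIN value reported further down in this log. This is a hedged sketch for readers reproducing the run, not part of the captured log itself.)
	
	  # sketch: replaying the last recorded "start" from the table above
	  out/minikube-linux-amd64 start -p old-k8s-version-399767 \
	    --memory=2200 --alsologtostderr --wait=true \
	    --kvm-network=default --kvm-qemu-uri=qemu:///system \
	    --disable-driver-mounts --keep-context=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0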
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:58:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:58:18.000027   72639 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:58:18.000165   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000176   72639 out.go:358] Setting ErrFile to fd 2...
	I1014 14:58:18.000189   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000390   72639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:58:18.000911   72639 out.go:352] Setting JSON to false
	I1014 14:58:18.001828   72639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6048,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:58:18.001919   72639 start.go:139] virtualization: kvm guest
	I1014 14:58:18.004056   72639 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:58:18.005382   72639 notify.go:220] Checking for updates...
	I1014 14:58:18.005437   72639 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:58:18.006939   72639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:58:18.008275   72639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:58:18.009565   72639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:58:18.010773   72639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:58:18.011941   72639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:58:18.013472   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:58:18.013833   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.013892   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.028372   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1014 14:58:18.028786   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.029355   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.029375   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.029671   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.029827   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.031644   72639 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:58:18.033229   72639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:58:18.033524   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.033565   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.048210   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1014 14:58:18.048620   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.049080   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.049102   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.049377   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.049550   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.084664   72639 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:58:18.085942   72639 start.go:297] selected driver: kvm2
	I1014 14:58:18.085952   72639 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.086042   72639 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:58:18.086707   72639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.086795   72639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:58:18.101802   72639 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:58:18.102194   72639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:58:18.102224   72639 cni.go:84] Creating CNI manager for ""
	I1014 14:58:18.102263   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:58:18.102315   72639 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.102441   72639 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.105418   72639 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:58:16.182868   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:18.106656   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:58:18.106696   72639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:58:18.106708   72639 cache.go:56] Caching tarball of preloaded images
	I1014 14:58:18.106790   72639 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:58:18.106800   72639 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:58:18.106889   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:58:18.107063   72639 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:58:22.262902   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:25.334877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:31.414867   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:34.486863   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:40.566883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:43.638929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:49.718856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:52.790946   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:58.870883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:01.942844   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:08.022831   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:11.094893   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:17.174897   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:20.246818   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:26.326911   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:29.398852   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:35.478877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:38.550829   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:44.630857   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:47.702856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:53.782842   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:56.854890   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:02.934894   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:06.006879   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:12.086905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:15.158856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:21.238905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:24.310889   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:30.390878   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:33.462909   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:39.542866   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:42.614929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:48.694859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:51.766865   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:57.846913   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:00.918859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:06.998892   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:10.070810   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:13.075950   72173 start.go:364] duration metric: took 3m43.687804446s to acquireMachinesLock for "embed-certs-989166"
	I1014 15:01:13.076005   72173 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:13.076011   72173 fix.go:54] fixHost starting: 
	I1014 15:01:13.076341   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:13.076386   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:13.092168   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I1014 15:01:13.092686   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:13.093180   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:01:13.093204   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:13.093560   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:13.093749   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:13.093889   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:01:13.095639   72173 fix.go:112] recreateIfNeeded on embed-certs-989166: state=Stopped err=<nil>
	I1014 15:01:13.095665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	W1014 15:01:13.095827   72173 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:13.097909   72173 out.go:177] * Restarting existing kvm2 VM for "embed-certs-989166" ...
	I1014 15:01:13.099253   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Start
	I1014 15:01:13.099433   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring networks are active...
	I1014 15:01:13.100328   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network default is active
	I1014 15:01:13.100683   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network mk-embed-certs-989166 is active
	I1014 15:01:13.101062   72173 main.go:141] libmachine: (embed-certs-989166) Getting domain xml...
	I1014 15:01:13.101867   72173 main.go:141] libmachine: (embed-certs-989166) Creating domain...
	I1014 15:01:13.073323   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:13.073356   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073658   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:01:13.073682   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:01:13.075825   71679 machine.go:96] duration metric: took 4m37.425006s to provisionDockerMachine
	I1014 15:01:13.075866   71679 fix.go:56] duration metric: took 4m37.446829923s for fixHost
	I1014 15:01:13.075872   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 4m37.446848059s
	W1014 15:01:13.075889   71679 start.go:714] error starting host: provision: host is not running
	W1014 15:01:13.075983   71679 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1014 15:01:13.075992   71679 start.go:729] Will try again in 5 seconds ...
	I1014 15:01:14.319338   72173 main.go:141] libmachine: (embed-certs-989166) Waiting to get IP...
	I1014 15:01:14.320167   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.320582   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.320651   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.320577   73268 retry.go:31] will retry after 213.073722ms: waiting for machine to come up
	I1014 15:01:14.534913   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.535353   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.535375   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.535306   73268 retry.go:31] will retry after 316.205029ms: waiting for machine to come up
	I1014 15:01:14.852769   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.853201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.853261   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.853201   73268 retry.go:31] will retry after 399.414867ms: waiting for machine to come up
	I1014 15:01:15.253657   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.253955   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.253979   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.253917   73268 retry.go:31] will retry after 537.097034ms: waiting for machine to come up
	I1014 15:01:15.792362   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.792736   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.792763   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.792703   73268 retry.go:31] will retry after 594.582114ms: waiting for machine to come up
	I1014 15:01:16.388419   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:16.388838   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:16.388869   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:16.388793   73268 retry.go:31] will retry after 814.814512ms: waiting for machine to come up
	I1014 15:01:17.204791   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:17.205229   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:17.205255   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:17.205176   73268 retry.go:31] will retry after 971.673961ms: waiting for machine to come up
	I1014 15:01:18.178701   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:18.179100   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:18.179130   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:18.179048   73268 retry.go:31] will retry after 941.576822ms: waiting for machine to come up
	I1014 15:01:19.122097   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:19.122488   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:19.122514   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:19.122453   73268 retry.go:31] will retry after 1.5308999s: waiting for machine to come up
	I1014 15:01:18.077601   71679 start.go:360] acquireMachinesLock for no-preload-813300: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:01:20.655098   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:20.655524   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:20.655550   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:20.655475   73268 retry.go:31] will retry after 1.590510545s: waiting for machine to come up
	I1014 15:01:22.248128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:22.248551   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:22.248572   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:22.248511   73268 retry.go:31] will retry after 1.965898839s: waiting for machine to come up
	I1014 15:01:24.215742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:24.216187   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:24.216240   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:24.216136   73268 retry.go:31] will retry after 3.476459931s: waiting for machine to come up
	I1014 15:01:27.696804   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:27.697201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:27.697254   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:27.697175   73268 retry.go:31] will retry after 3.212757582s: waiting for machine to come up
	I1014 15:01:32.235659   72390 start.go:364] duration metric: took 3m35.715993521s to acquireMachinesLock for "default-k8s-diff-port-201291"
	I1014 15:01:32.235710   72390 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:32.235718   72390 fix.go:54] fixHost starting: 
	I1014 15:01:32.236084   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:32.236134   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:32.253294   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I1014 15:01:32.253760   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:32.254255   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:01:32.254275   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:32.254616   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:32.254797   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:32.254973   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:01:32.256494   72390 fix.go:112] recreateIfNeeded on default-k8s-diff-port-201291: state=Stopped err=<nil>
	I1014 15:01:32.256523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	W1014 15:01:32.256683   72390 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:32.258989   72390 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-201291" ...
	I1014 15:01:30.911781   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912283   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has current primary IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912313   72173 main.go:141] libmachine: (embed-certs-989166) Found IP for machine: 192.168.39.41
	I1014 15:01:30.912331   72173 main.go:141] libmachine: (embed-certs-989166) Reserving static IP address...
	I1014 15:01:30.912771   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.912799   72173 main.go:141] libmachine: (embed-certs-989166) DBG | skip adding static IP to network mk-embed-certs-989166 - found existing host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"}
	I1014 15:01:30.912806   72173 main.go:141] libmachine: (embed-certs-989166) Reserved static IP address: 192.168.39.41
	I1014 15:01:30.912815   72173 main.go:141] libmachine: (embed-certs-989166) Waiting for SSH to be available...
	I1014 15:01:30.912822   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Getting to WaitForSSH function...
	I1014 15:01:30.914919   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915273   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.915310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915392   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH client type: external
	I1014 15:01:30.915414   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa (-rw-------)
	I1014 15:01:30.915465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:30.915489   72173 main.go:141] libmachine: (embed-certs-989166) DBG | About to run SSH command:
	I1014 15:01:30.915503   72173 main.go:141] libmachine: (embed-certs-989166) DBG | exit 0
	I1014 15:01:31.042620   72173 main.go:141] libmachine: (embed-certs-989166) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:31.043061   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetConfigRaw
	I1014 15:01:31.043675   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.046338   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046679   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.046720   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046941   72173 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/config.json ...
	I1014 15:01:31.047132   72173 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:31.047149   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.047348   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.049453   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049835   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.049857   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049978   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.050147   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050282   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050419   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.050573   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.050814   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.050828   72173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:31.163270   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:31.163306   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163614   72173 buildroot.go:166] provisioning hostname "embed-certs-989166"
	I1014 15:01:31.163644   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.166684   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167009   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.167040   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.167416   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167582   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167718   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.167857   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.168025   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.168040   72173 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-989166 && echo "embed-certs-989166" | sudo tee /etc/hostname
	I1014 15:01:31.292369   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-989166
	
	I1014 15:01:31.292405   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.295057   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295425   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.295449   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295713   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.295915   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296076   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296220   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.296395   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.296552   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.296567   72173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-989166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-989166/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-989166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:31.411080   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:31.411112   72173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:31.411131   72173 buildroot.go:174] setting up certificates
	I1014 15:01:31.411142   72173 provision.go:84] configureAuth start
	I1014 15:01:31.411150   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.411396   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.413972   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414319   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.414341   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.416775   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417092   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.417113   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417278   72173 provision.go:143] copyHostCerts
	I1014 15:01:31.417340   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:31.417353   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:31.417437   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:31.417549   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:31.417559   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:31.417600   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:31.417677   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:31.417687   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:31.417721   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:31.417788   72173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.embed-certs-989166 san=[127.0.0.1 192.168.39.41 embed-certs-989166 localhost minikube]
	I1014 15:01:31.599973   72173 provision.go:177] copyRemoteCerts
	I1014 15:01:31.600034   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:31.600060   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.602964   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603270   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.603296   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.603665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.603821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.603949   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:31.688890   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:31.713474   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 15:01:31.737692   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 15:01:31.760955   72173 provision.go:87] duration metric: took 349.799595ms to configureAuth
	I1014 15:01:31.760986   72173 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:31.761172   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:31.761244   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.763800   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764149   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.764181   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.764494   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764636   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764732   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.764852   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.765002   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.765016   72173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:31.992783   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:31.992811   72173 machine.go:96] duration metric: took 945.667058ms to provisionDockerMachine
	I1014 15:01:31.992823   72173 start.go:293] postStartSetup for "embed-certs-989166" (driver="kvm2")
	I1014 15:01:31.992834   72173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:31.992848   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.993203   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:31.993235   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.995966   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996380   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.996418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996538   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.996714   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.996864   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.997003   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.081931   72173 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:32.086191   72173 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:32.086218   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:32.086287   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:32.086368   72173 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:32.086455   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:32.096414   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:32.120348   72173 start.go:296] duration metric: took 127.509685ms for postStartSetup
	I1014 15:01:32.120392   72173 fix.go:56] duration metric: took 19.044380323s for fixHost
	I1014 15:01:32.120412   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.123024   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123435   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.123465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123649   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.123832   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.123986   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.124152   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.124288   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:32.124487   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:32.124502   72173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:32.235487   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918092.208431219
	
	I1014 15:01:32.235513   72173 fix.go:216] guest clock: 1728918092.208431219
	I1014 15:01:32.235522   72173 fix.go:229] Guest: 2024-10-14 15:01:32.208431219 +0000 UTC Remote: 2024-10-14 15:01:32.12039587 +0000 UTC m=+242.874215269 (delta=88.035349ms)
	I1014 15:01:32.235559   72173 fix.go:200] guest clock delta is within tolerance: 88.035349ms
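fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a tolerance. A minimal sketch of that comparison, using the values from the log above (the tolerance constant is illustrative, not minikube's actual threshold):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` run on the guest and
// returns how far the guest clock is from the given host reference time.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Host ("Remote") and guest timestamps taken from the log above.
	host := time.Date(2024, 10, 14, 15, 1, 32, 120395870, time.UTC)
	delta, err := guestClockDelta("1728918092.208431219", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative threshold
	within := delta < tolerance && delta > -tolerance
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
}
```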
	I1014 15:01:32.235572   72173 start.go:83] releasing machines lock for "embed-certs-989166", held for 19.159587089s
	I1014 15:01:32.235600   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.235877   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:32.238642   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.238995   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.239025   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.239175   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239719   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239891   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239978   72173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:32.240031   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.240091   72173 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:32.240115   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.242742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243102   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243177   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243275   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243482   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.243653   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243664   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.243676   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243811   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243822   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.243929   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.244050   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.244168   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.357542   72173 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:32.365113   72173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:32.510557   72173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:32.516545   72173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:32.516628   72173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:32.533449   72173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:32.533473   72173 start.go:495] detecting cgroup driver to use...
	I1014 15:01:32.533549   72173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:32.549503   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:32.563126   72173 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:32.563184   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:32.576972   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:32.591047   72173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:32.704839   72173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:32.844770   72173 docker.go:233] disabling docker service ...
	I1014 15:01:32.844855   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:32.859524   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:32.872297   72173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:33.014291   72173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:33.136889   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:33.151656   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:33.170504   72173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:33.170575   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.180894   72173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:33.180968   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.192268   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.203509   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.215958   72173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:33.227981   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.241615   72173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.261168   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.273098   72173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:33.284050   72173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:33.284225   72173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:33.299547   72173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:33.310259   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:33.426563   72173 ssh_runner.go:195] Run: sudo systemctl restart crio
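The sed invocations above point cri-o at the pause:3.10 image and switch it to the cgroupfs cgroup manager before the daemon is restarted. A small sketch of the same two substitutions applied in Go, with regexp standing in for `sed -i` (the config snippet is illustrative, not the real 02-crio.conf):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A tiny stand-in for /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)

	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	fmt.Print(conf)
}
```

The remaining sed edits in the log (conmon_cgroup, default_sysctls, ip_unprivileged_port_start) follow the same rewrite-in-place pattern.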
	I1014 15:01:33.526759   72173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:33.526817   72173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:33.532297   72173 start.go:563] Will wait 60s for crictl version
	I1014 15:01:33.532356   72173 ssh_runner.go:195] Run: which crictl
	I1014 15:01:33.536385   72173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:33.576222   72173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:33.576305   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.604603   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.636261   72173 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:33.637497   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:33.640450   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.640768   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:33.640806   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.641001   72173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:33.645241   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
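The bash one-liner above strips any stale `host.minikube.internal` line from /etc/hosts and appends the current gateway entry. An equivalent sketch in Go (the path, IP, and hostname are the ones in the log; this is an illustration of the grep -v / echo pipeline, not minikube's code):

```go
package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry removes any line that already maps hostname and appends a
// fresh "ip<TAB>hostname" entry, mirroring the grep -v / echo / cp pipeline.
func ensureHostsEntry(path, ip, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
```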
	I1014 15:01:33.658028   72173 kubeadm.go:883] updating cluster {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:33.658181   72173 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:33.658246   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:33.695188   72173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:33.695261   72173 ssh_runner.go:195] Run: which lz4
	I1014 15:01:33.699735   72173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:33.704540   72173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:33.704576   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:32.260401   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Start
	I1014 15:01:32.260569   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring networks are active...
	I1014 15:01:32.261176   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network default is active
	I1014 15:01:32.261498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network mk-default-k8s-diff-port-201291 is active
	I1014 15:01:32.261795   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Getting domain xml...
	I1014 15:01:32.262414   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Creating domain...
	I1014 15:01:33.520115   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting to get IP...
	I1014 15:01:33.521127   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521518   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.521520   73405 retry.go:31] will retry after 278.409801ms: waiting for machine to come up
	I1014 15:01:33.802289   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802720   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.802688   73405 retry.go:31] will retry after 362.923826ms: waiting for machine to come up
	I1014 15:01:34.167836   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168228   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168273   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.168163   73405 retry.go:31] will retry after 315.156371ms: waiting for machine to come up
	I1014 15:01:34.485445   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485855   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.485840   73405 retry.go:31] will retry after 573.46626ms: waiting for machine to come up
	I1014 15:01:35.061472   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.061997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.062027   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.061965   73405 retry.go:31] will retry after 519.420022ms: waiting for machine to come up
	I1014 15:01:35.582694   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583130   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583155   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.583062   73405 retry.go:31] will retry after 661.055324ms: waiting for machine to come up
	I1014 15:01:36.245525   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:36.245834   73405 retry.go:31] will retry after 870.411428ms: waiting for machine to come up
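Interleaved with the embed-certs provisioning, a second process (pid 72390) is restarting default-k8s-diff-port-201291 and repeatedly asking libvirt for its DHCP lease, sleeping a randomized interval between attempts (the "will retry after …" lines from retry.go:31). A minimal sketch of that retry-until-IP pattern; the lookup function is a stand-in for the libvirt query and the demo address is hypothetical:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoIP = errors.New("unable to find current IP address")

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a short randomized interval between attempts, like retry.go does.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for attempt := 1; ; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		delay := time.Duration(200+rand.Intn(800)) * time.Millisecond
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
	}
}

func main() {
	// Stand-in lookup: pretend the DHCP lease shows up on the 4th attempt.
	calls := 0
	lookup := func() (string, error) {
		calls++
		if calls < 4 {
			return "", errNoIP
		}
		return "192.168.50.10", nil // hypothetical address for the demo
	}
	fmt.Println(waitForIP(lookup, 3*time.Minute))
}
```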
	I1014 15:01:35.120269   72173 crio.go:462] duration metric: took 1.42058504s to copy over tarball
	I1014 15:01:35.120372   72173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:37.206126   72173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08572724s)
	I1014 15:01:37.206168   72173 crio.go:469] duration metric: took 2.085859852s to extract the tarball
	I1014 15:01:37.206176   72173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:37.243007   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:37.289639   72173 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:37.289667   72173 cache_images.go:84] Images are preloaded, skipping loading
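crio.go decides whether the preload tarball has to be copied and extracted by asking crictl for the image list and looking for the expected kube-apiserver tag (first attempt above: "assuming images are not preloaded"; after extraction: "all images are preloaded"). A rough sketch of that check; the log shows JSON output being requested, but a plain substring match keeps the sketch short:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagesPreloaded reports whether the CRI runtime already knows about the
// image that the preload tarball would provide.
func imagesPreloaded(wantImage string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	// Simplification: look for the tag in the JSON dump instead of decoding it.
	return strings.Contains(string(out), wantImage), nil
}

func main() {
	ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.31.1")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	if ok {
		fmt.Println("all images are preloaded, skipping tarball copy")
	} else {
		fmt.Println("assuming images are not preloaded, copying /preloaded.tar.lz4")
	}
}
```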
	I1014 15:01:37.289678   72173 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.31.1 crio true true} ...
	I1014 15:01:37.289793   72173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-989166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:37.289878   72173 ssh_runner.go:195] Run: crio config
	I1014 15:01:37.348641   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:37.348665   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:37.348684   72173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:37.348711   72173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-989166 NodeName:embed-certs-989166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:37.348861   72173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-989166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:37.348925   72173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:37.359204   72173 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:37.359272   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:37.368810   72173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 15:01:37.385402   72173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:37.401828   72173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
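The kubeadm/kubelet/kube-proxy manifest printed at kubeadm.go:195 above is rendered from the cluster config and then copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch of producing such a manifest with text/template; the struct and template here are illustrative, not minikube's actual types:

```go
package main

import (
	"os"
	"text/template"
)

// clusterParams holds just the fields the template below needs.
type clusterParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		AdvertiseAddress:  "192.168.39.41",
		BindPort:          8443,
		NodeName:          "embed-certs-989166",
		KubernetesVersion: "v1.31.1",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```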
	I1014 15:01:37.418811   72173 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:37.422655   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:37.434567   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:37.561408   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:37.579549   72173 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166 for IP: 192.168.39.41
	I1014 15:01:37.579577   72173 certs.go:194] generating shared ca certs ...
	I1014 15:01:37.579596   72173 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:37.579766   72173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:37.579878   72173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:37.579894   72173 certs.go:256] generating profile certs ...
	I1014 15:01:37.579998   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/client.key
	I1014 15:01:37.580079   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key.8939f8c2
	I1014 15:01:37.580148   72173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key
	I1014 15:01:37.580316   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:37.580364   72173 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:37.580376   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:37.580413   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:37.580445   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:37.580482   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:37.580536   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:37.581259   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:37.632130   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:37.678608   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:37.705377   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:37.731897   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 15:01:37.775043   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:37.801653   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:37.826547   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:37.852086   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:37.878715   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:37.905883   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:37.932458   72173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:37.951362   72173 ssh_runner.go:195] Run: openssl version
	I1014 15:01:37.957730   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:37.969936   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974871   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974931   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.981060   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:37.992086   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:38.003528   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008267   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008332   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.014243   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:38.025272   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:38.036191   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040751   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040804   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.046605   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
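The `test -L ... || ln -fs ...` commands above create the OpenSSL-style hash symlinks (b5213941.0, 51391683.0, 3ec20f2e.0) that make the copied certificates discoverable in /etc/ssl/certs. A sketch of deriving the link name from `openssl x509 -hash` and creating it; illustrative only, and it runs openssl on the local machine rather than over SSH:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and creates the
// "<hash>.0" symlink that OpenSSL's certificate lookup expects in certsDir.
func linkByHash(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Remove a stale link first so this behaves like `ln -fs`.
	_ = os.Remove(link)
	return link, os.Symlink(certPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created", link)
}
```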
	I1014 15:01:38.057815   72173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:38.062497   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:38.068889   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:38.075278   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:38.081663   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:38.087892   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:38.093748   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
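Each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate remains valid for at least another day. The same test can be expressed against NotAfter with Go's crypto/x509; a sketch that reads one certificate from a local path:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
```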
	I1014 15:01:38.099807   72173 kubeadm.go:392] StartCluster: {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:38.099912   72173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:38.099960   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.140896   72173 cri.go:89] found id: ""
	I1014 15:01:38.140973   72173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:38.151443   72173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:38.151462   72173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:38.151512   72173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:38.161419   72173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:38.162357   72173 kubeconfig.go:125] found "embed-certs-989166" server: "https://192.168.39.41:8443"
	I1014 15:01:38.164328   72173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:38.174731   72173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.41
	I1014 15:01:38.174767   72173 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:38.174782   72173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:38.174849   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.214903   72173 cri.go:89] found id: ""
	I1014 15:01:38.214982   72173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:38.232891   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:38.242711   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:38.242735   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:38.242793   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:01:38.251939   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:38.252019   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:38.262108   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:01:38.271688   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:38.271751   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:38.281420   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.290693   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:38.290752   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.300205   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:01:38.309174   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:38.309236   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:38.318616   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:38.328337   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:38.436297   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:37.118307   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:37.118706   73405 retry.go:31] will retry after 1.481454557s: waiting for machine to come up
	I1014 15:01:38.601780   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602168   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602212   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:38.602118   73405 retry.go:31] will retry after 1.22705177s: waiting for machine to come up
	I1014 15:01:39.831413   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831889   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831963   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:39.831838   73405 retry.go:31] will retry after 1.898722681s: waiting for machine to come up
	I1014 15:01:39.574410   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138075676s)
	I1014 15:01:39.574444   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.789417   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.873563   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:40.011579   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:40.011673   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:40.511877   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.012608   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.512235   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.012435   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.047878   72173 api_server.go:72] duration metric: took 2.036298602s to wait for apiserver process to appear ...
	I1014 15:01:42.047909   72173 api_server.go:88] waiting for apiserver healthz status ...
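The checks that follow poll /healthz until the apiserver's post-start hooks settle: 403 while RBAC has not yet granted system:anonymous access to the path, then 500 while the rbac/bootstrap-roles and bootstrap-system-priority-classes hooks are still running, and eventually 200. A minimal sketch of such a poll loop; certificate verification is skipped here only because the sketch does not load the cluster CA bundle, and the half-second cadence roughly matches the timestamps in the log:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it returns 200
// or the timeout expires, printing the intermediate 403/500 responses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The real client trusts the cluster CA; skipping verification keeps the sketch short.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.41:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```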
	I1014 15:01:42.047935   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.298692   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.298726   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.298743   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.317315   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.317353   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.548651   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.559477   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:44.559513   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.048060   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.057070   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.057099   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.548344   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.552611   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.552640   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:46.048314   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:46.054943   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:01:46.062740   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:01:46.062769   72173 api_server.go:131] duration metric: took 4.014851988s to wait for apiserver health ...
	I1014 15:01:46.062779   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:46.062785   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:46.064824   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
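The block above shows minikube polling the apiserver's /healthz endpoint: each 500 response lists the post-start hooks as [+] ok or [-] failed until the rbac/bootstrap-roles and priority-class hooks complete, at which point the endpoint returns 200 "ok". Below is a minimal, self-contained sketch of such a polling loop; it is an illustration only, not minikube's actual api_server.go, and the URL, poll interval, and deadline are placeholder values taken from the log.

// healthzwait: illustrative sketch of polling an apiserver /healthz endpoint
// until it stops returning 500 and answers 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// The apiserver in this sketch serves a self-signed certificate, so
		// verification is skipped; real callers should trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
			// A 500 carries the [+]/[-] check list seen in the log; retry.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.41:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}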
	I1014 15:01:41.731928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732483   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732515   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:41.732435   73405 retry.go:31] will retry after 2.349662063s: waiting for machine to come up
	I1014 15:01:44.083975   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084492   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:44.084437   73405 retry.go:31] will retry after 3.472214726s: waiting for machine to come up
	I1014 15:01:46.066505   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:01:46.092975   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:01:46.123873   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:01:46.142575   72173 system_pods.go:59] 8 kube-system pods found
	I1014 15:01:46.142636   72173 system_pods.go:61] "coredns-7c65d6cfc9-r8x9s" [5a00095c-8777-412a-a7af-319a03d6153e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:01:46.142647   72173 system_pods.go:61] "etcd-embed-certs-989166" [981d2f54-f128-4527-a7cb-a6b9c647740b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:01:46.142658   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [31780b5a-6ebf-4c75-bd27-64a95193827f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:01:46.142668   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [345e7656-579a-4be9-bcf0-4117880a2988] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:01:46.142678   72173 system_pods.go:61] "kube-proxy-7p84k" [5d8243a8-7247-490f-9102-61008a614a67] Running
	I1014 15:01:46.142685   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [53b4b4a4-74ec-485e-99e3-b53c2edc80ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:01:46.142695   72173 system_pods.go:61] "metrics-server-6867b74b74-zc8zh" [5abf22c7-d271-4c3a-8e0e-cd867142cee1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:01:46.142703   72173 system_pods.go:61] "storage-provisioner" [6860efa4-c72f-477f-b9e1-e90ddcd112b5] Running
	I1014 15:01:46.142711   72173 system_pods.go:74] duration metric: took 18.811157ms to wait for pod list to return data ...
	I1014 15:01:46.142722   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:01:46.154420   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:01:46.154449   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:01:46.154463   72173 node_conditions.go:105] duration metric: took 11.735142ms to run NodePressure ...
	I1014 15:01:46.154483   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:46.417106   72173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422102   72173 kubeadm.go:739] kubelet initialised
	I1014 15:01:46.422127   72173 kubeadm.go:740] duration metric: took 4.991248ms waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422135   72173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:01:46.428014   72173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.432946   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432965   72173 pod_ready.go:82] duration metric: took 4.927935ms for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.432972   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432979   72173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.441849   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441868   72173 pod_ready.go:82] duration metric: took 8.882863ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.441877   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441883   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.446863   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446891   72173 pod_ready.go:82] duration metric: took 4.997658ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.446912   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446922   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.526949   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526972   72173 pod_ready.go:82] duration metric: took 80.035898ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.526981   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526987   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927217   72173 pod_ready.go:93] pod "kube-proxy-7p84k" in "kube-system" namespace has status "Ready":"True"
	I1014 15:01:46.927249   72173 pod_ready.go:82] duration metric: took 400.252417ms for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927263   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:48.933034   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
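The pod_ready lines above record minikube waiting for each system-critical pod to report a Ready condition, skipping pods whose node is still NotReady. A rough client-go sketch of the basic wait follows; it is illustrative only, not minikube's pod_ready.go, the kubeconfig path is a placeholder, and the pod name is copied from the log.

// podready: illustrative sketch of polling a pod until its Ready condition is True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPod(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pod %s/%s not Ready: %w", ns, name, ctx.Err())
		case <-time.After(2 * time.Second): // poll interval chosen for the sketch
		}
	}
}

func main() {
	// Placeholder kubeconfig path; in the test run this comes from the profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitForPod(ctx, cs, "kube-system", "coredns-7c65d6cfc9-r8x9s"); err != nil {
		fmt.Println(err)
	}
}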
	I1014 15:01:47.558671   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559112   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:47.559067   73405 retry.go:31] will retry after 3.421253013s: waiting for machine to come up
	I1014 15:01:50.981602   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has current primary IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982167   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Found IP for machine: 192.168.50.128
	I1014 15:01:50.982186   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserving static IP address...
	I1014 15:01:50.982682   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.982703   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserved static IP address: 192.168.50.128
	I1014 15:01:50.982722   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | skip adding static IP to network mk-default-k8s-diff-port-201291 - found existing host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"}
	I1014 15:01:50.982743   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Getting to WaitForSSH function...
	I1014 15:01:50.982781   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for SSH to be available...
	I1014 15:01:50.985084   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.985640   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985750   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH client type: external
	I1014 15:01:50.985778   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa (-rw-------)
	I1014 15:01:50.985814   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:50.985832   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | About to run SSH command:
	I1014 15:01:50.985849   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | exit 0
	I1014 15:01:51.123927   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:51.124457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetConfigRaw
	I1014 15:01:51.125106   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.128286   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.128716   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.128770   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.129045   72390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/config.json ...
	I1014 15:01:51.129283   72390 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:51.129308   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.129551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.131756   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132164   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.132207   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132488   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.132701   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.132873   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.133022   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.133181   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.133421   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.133436   72390 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:51.244659   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:51.244691   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.244923   72390 buildroot.go:166] provisioning hostname "default-k8s-diff-port-201291"
	I1014 15:01:51.244953   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.245149   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.248061   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248429   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.248463   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248521   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.248697   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.248887   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.249034   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.249227   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.249448   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.249463   72390 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-201291 && echo "default-k8s-diff-port-201291" | sudo tee /etc/hostname
	I1014 15:01:51.373260   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-201291
	
	I1014 15:01:51.373293   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.376195   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376528   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.376549   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376752   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.376962   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377159   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377296   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.377446   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.377657   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.377676   72390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-201291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-201291/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-201291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:52.179441   72639 start.go:364] duration metric: took 3m34.072351032s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 15:01:52.179497   72639 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:52.179505   72639 fix.go:54] fixHost starting: 
	I1014 15:01:52.179834   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:52.179873   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:52.196724   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I1014 15:01:52.197171   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:52.197649   72639 main.go:141] libmachine: Using API Version  1
	I1014 15:01:52.197673   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:52.198010   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:52.198191   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:01:52.198337   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 15:01:52.199789   72639 fix.go:112] recreateIfNeeded on old-k8s-version-399767: state=Stopped err=<nil>
	I1014 15:01:52.199826   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	W1014 15:01:52.199998   72639 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:52.202220   72639 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	I1014 15:01:52.203601   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .Start
	I1014 15:01:52.203771   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 15:01:52.204575   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 15:01:52.204971   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 15:01:52.205326   72639 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 15:01:52.206026   72639 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 15:01:51.488446   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:51.488486   72390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:51.488535   72390 buildroot.go:174] setting up certificates
	I1014 15:01:51.488553   72390 provision.go:84] configureAuth start
	I1014 15:01:51.488570   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.488867   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.491749   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492141   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.492171   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492351   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.494197   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.494524   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494693   72390 provision.go:143] copyHostCerts
	I1014 15:01:51.494745   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:51.494764   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:51.494834   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:51.494945   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:51.494958   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:51.494992   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:51.495081   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:51.495095   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:51.495122   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:51.495214   72390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-201291 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-201291 localhost minikube]
	I1014 15:01:51.567041   72390 provision.go:177] copyRemoteCerts
	I1014 15:01:51.567098   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:51.567121   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.570006   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570340   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.570368   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570562   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.570769   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.570941   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.571047   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:51.652956   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:51.677959   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 15:01:51.702009   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:01:51.727016   72390 provision.go:87] duration metric: took 238.449189ms to configureAuth
	I1014 15:01:51.727043   72390 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:51.727207   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:51.727276   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.729742   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730043   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.730065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.730418   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730578   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730735   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.730891   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.731097   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.731114   72390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:51.942847   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:51.942874   72390 machine.go:96] duration metric: took 813.575194ms to provisionDockerMachine
	I1014 15:01:51.942888   72390 start.go:293] postStartSetup for "default-k8s-diff-port-201291" (driver="kvm2")
	I1014 15:01:51.942903   72390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:51.942926   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.943250   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:51.943283   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.946246   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946608   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.946638   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946799   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.946984   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.947165   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.947293   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.030124   72390 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:52.034493   72390 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:52.034525   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:52.034625   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:52.034740   72390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:52.034834   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:52.044919   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:52.068326   72390 start.go:296] duration metric: took 125.426221ms for postStartSetup
	I1014 15:01:52.068370   72390 fix.go:56] duration metric: took 19.832650283s for fixHost
	I1014 15:01:52.068394   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.070949   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071362   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.071388   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071588   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.071788   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.071908   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.072065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.072231   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:52.072449   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:52.072468   72390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:52.179264   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918112.149610573
	
	I1014 15:01:52.179291   72390 fix.go:216] guest clock: 1728918112.149610573
	I1014 15:01:52.179301   72390 fix.go:229] Guest: 2024-10-14 15:01:52.149610573 +0000 UTC Remote: 2024-10-14 15:01:52.06837553 +0000 UTC m=+235.685992564 (delta=81.235043ms)
	I1014 15:01:52.179349   72390 fix.go:200] guest clock delta is within tolerance: 81.235043ms
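The fix.go lines above compare the guest clock (read with `date +%s.%N` over SSH) against the host clock and accept the drift when it falls within tolerance. A small sketch of that comparison follows, assuming the epoch string has already been fetched from the guest; the hard-coded value comes from the log and the 2s tolerance is illustrative, not minikube's actual threshold.

// clockdelta: illustrative sketch of parsing a guest `date +%s.%N` reading
// and measuring its drift from the local clock.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// In the real flow this string is the guest's `date +%s.%N` output over SSH.
	guest, err := parseEpoch("1728918112.149610573")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // illustrative threshold only
	fmt.Printf("guest clock delta %v (tolerance %v, ok=%v)\n", delta, tolerance, delta <= tolerance)
}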
	I1014 15:01:52.179354   72390 start.go:83] releasing machines lock for "default-k8s-diff-port-201291", held for 19.943664398s
	I1014 15:01:52.179387   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.179666   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:52.182457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.182834   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.182861   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.183000   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183598   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183883   72390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:52.183928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.183993   72390 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:52.184017   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.186499   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186692   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186890   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.186915   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187021   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.187050   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187086   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187288   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187331   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187479   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187485   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187597   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.187688   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187843   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.264102   72390 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:52.291233   72390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:52.443318   72390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:52.450321   72390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:52.450400   72390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:52.467949   72390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:52.467975   72390 start.go:495] detecting cgroup driver to use...
	I1014 15:01:52.468039   72390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:52.485758   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:52.500662   72390 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:52.500729   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:52.520846   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:52.535606   72390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:52.671062   72390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:52.845631   72390 docker.go:233] disabling docker service ...
	I1014 15:01:52.845694   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:52.867403   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:52.882344   72390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:53.020570   72390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:53.157941   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:53.174989   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:53.195729   72390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:53.195799   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.207613   72390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:53.207671   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.218838   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.231186   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.247521   72390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:53.258128   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.269119   72390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.287810   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.298576   72390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:53.308114   72390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:53.308169   72390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:53.322207   72390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:53.332284   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:53.483702   72390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:53.581260   72390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:53.581341   72390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:53.586042   72390 start.go:563] Will wait 60s for crictl version
	I1014 15:01:53.586105   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:01:53.589931   72390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:53.634776   72390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:53.634864   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.664242   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.698374   72390 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:50.933590   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:52.935445   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:53.699730   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:53.702837   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703224   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:53.703245   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703528   72390 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:53.707720   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:53.721953   72390 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:53.722106   72390 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:53.722165   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:53.779083   72390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:53.779139   72390 ssh_runner.go:195] Run: which lz4
	I1014 15:01:53.783197   72390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:53.787515   72390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:53.787549   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:55.277150   72390 crio.go:462] duration metric: took 1.493980352s to copy over tarball
	I1014 15:01:55.277212   72390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:53.506315   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 15:01:53.507576   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.508228   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.508297   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.508202   73581 retry.go:31] will retry after 220.59125ms: waiting for machine to come up
	I1014 15:01:53.730853   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.731286   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.731339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.731257   73581 retry.go:31] will retry after 321.559387ms: waiting for machine to come up
	I1014 15:01:54.054891   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.055482   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.055509   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.055443   73581 retry.go:31] will retry after 444.912998ms: waiting for machine to come up
	I1014 15:01:54.502125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.502479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.502525   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.502462   73581 retry.go:31] will retry after 600.214254ms: waiting for machine to come up
	I1014 15:01:55.104962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.105479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.105504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.105425   73581 retry.go:31] will retry after 686.77698ms: waiting for machine to come up
	I1014 15:01:55.794125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.794825   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.794871   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.794717   73581 retry.go:31] will retry after 926.146146ms: waiting for machine to come up
	I1014 15:01:56.722712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:56.723153   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:56.723183   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:56.723112   73581 retry.go:31] will retry after 1.108272037s: waiting for machine to come up
	I1014 15:01:57.832729   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:57.833304   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:57.833356   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:57.833279   73581 retry.go:31] will retry after 1.442737664s: waiting for machine to come up
	I1014 15:01:55.435691   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.933561   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.424526   72390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.147277316s)
	I1014 15:01:57.424559   72390 crio.go:469] duration metric: took 2.147385522s to extract the tarball
	I1014 15:01:57.424566   72390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:57.461792   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:57.504424   72390 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:57.504450   72390 cache_images.go:84] Images are preloaded, skipping loading
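
The preload sequence above asks the runtime for its images, notices the expected kube-apiserver image is missing, copies the preloaded tarball over, unpacks it into /var, and then confirms all images are present. A condensed sketch of that decision, shelling out to the same commands the log shows (the image reference and paths are from the log; the substring check stands in for real JSON parsing, and this is not the ssh_runner implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hasImage asks the CRI runtime whether an image is already present.
// A crude substring check over the JSON output is enough for a sketch.
func hasImage(ref string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), ref), nil
}

func main() {
	const needed = "registry.k8s.io/kube-apiserver:v1.31.1"
	ok, err := hasImage(needed)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("all images are preloaded, skipping loading")
		return
	}
	// Assumes the tarball has already been copied to /preloaded.tar.lz4
	// (the log does this with scp). Unpack it into /var, then remove it.
	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
	if err := extract.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	_ = os.Remove("/preloaded.tar.lz4")
}
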
	I1014 15:01:57.504460   72390 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.1 crio true true} ...
	I1014 15:01:57.504656   72390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-201291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:57.504759   72390 ssh_runner.go:195] Run: crio config
	I1014 15:01:57.555431   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:01:57.555453   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:57.555462   72390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:57.555482   72390 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-201291 NodeName:default-k8s-diff-port-201291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:57.555593   72390 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-201291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.128"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:57.555652   72390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:57.565953   72390 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:57.566025   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:57.576141   72390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1014 15:01:57.594855   72390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:57.611249   72390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1014 15:01:57.628363   72390 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:57.632552   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:57.645588   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:57.769192   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:57.787654   72390 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291 for IP: 192.168.50.128
	I1014 15:01:57.787677   72390 certs.go:194] generating shared ca certs ...
	I1014 15:01:57.787695   72390 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:57.787865   72390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:57.787916   72390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:57.787930   72390 certs.go:256] generating profile certs ...
	I1014 15:01:57.788084   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/client.key
	I1014 15:01:57.788174   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key.517dfce8
	I1014 15:01:57.788223   72390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key
	I1014 15:01:57.788371   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:57.788407   72390 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:57.788417   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:57.788439   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:57.788460   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:57.788482   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:57.788521   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:57.789141   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:57.821159   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:57.875530   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:57.902687   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:57.935658   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 15:01:57.961987   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:57.987107   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:58.013544   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:58.039793   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:58.071154   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:58.102574   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:58.127398   72390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:58.144906   72390 ssh_runner.go:195] Run: openssl version
	I1014 15:01:58.150817   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:58.162122   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167170   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167240   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.173692   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:58.185769   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:58.197045   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201652   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201716   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.207559   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:58.218921   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:58.230822   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235774   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235832   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.241546   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
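
Each openssl x509 -hash plus ln -fs pair above installs a CA into the system trust store under its subject-hash name, the <hash>.0 convention OpenSSL uses to look up trusted certificates. A small sketch of one such step (paths are from the log; assumes openssl is on PATH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash computes the OpenSSL subject hash of a PEM certificate and
// symlinks it into /etc/ssl/certs as <hash>.0, which is how OpenSSL finds
// trusted CAs by subject.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CA linked into /etc/ssl/certs")
}
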
	I1014 15:01:58.252618   72390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:58.257509   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:58.263891   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:58.270085   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:58.276427   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:58.282346   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:58.288396   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
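
Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours (86400 seconds). The same check can be written in pure Go with crypto/x509; a minimal sketch, with an illustrative certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, the equivalent of `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate is still valid")
	}
}
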
	I1014 15:01:58.294386   72390 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:58.294472   72390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:58.294517   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.342008   72390 cri.go:89] found id: ""
	I1014 15:01:58.342088   72390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:58.352478   72390 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:58.352512   72390 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:58.352566   72390 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:58.363158   72390 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:58.364106   72390 kubeconfig.go:125] found "default-k8s-diff-port-201291" server: "https://192.168.50.128:8444"
	I1014 15:01:58.366079   72390 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:58.375635   72390 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I1014 15:01:58.375666   72390 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:58.375680   72390 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:58.375733   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.411846   72390 cri.go:89] found id: ""
	I1014 15:01:58.411923   72390 ssh_runner.go:195] Run: sudo systemctl stop kubelet
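
The "stopping kube-system containers" step lists CRI containers by the io.kubernetes.pod.namespace label and then stops the kubelet; here the ID list comes back empty, so there is nothing to stop. A hedged sketch of that list-then-stop pattern using crictl (run locally for simplicity; minikube drives it over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// List all containers (running or exited) belonging to kube-system pods.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers to stop")
	}
	for _, id := range ids {
		// Stop each container before reconfiguring the control plane.
		if err := exec.Command("sudo", "crictl", "stop", id).Run(); err != nil {
			fmt.Fprintf(os.Stderr, "stop %s: %v\n", id, err)
		}
	}
	// Finally stop the kubelet so it does not restart static pods mid-rewrite.
	if err := exec.Command("sudo", "systemctl", "stop", "kubelet").Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
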
	I1014 15:01:58.428602   72390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:58.439214   72390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:58.439239   72390 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:58.439293   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1014 15:01:58.448475   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:58.448528   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:58.457816   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1014 15:01:58.467279   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:58.467352   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:58.477479   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.487899   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:58.487968   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.498296   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1014 15:01:58.507910   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:58.507977   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:58.517901   72390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
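
The grep/rm pairs above are the stale-config check: each kubeconfig under /etc/kubernetes should point at https://control-plane.minikube.internal:8444, and any file that does not (or, as here, does not exist) is removed so kubeadm can regenerate it. A compact sketch of that loop, with the endpoint and file list taken from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the right endpoint; keep it
		}
		// Missing or pointing elsewhere: remove it so
		// `kubeadm init phase kubeconfig` writes a fresh file.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintf(os.Stderr, "remove %s: %v\n", f, err)
		}
	}
}
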
	I1014 15:01:58.527983   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:58.654226   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.576099   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.790552   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.879043   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
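
Rather than a full kubeadm init, the restart path replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the freshly written /var/tmp/minikube/kubeadm.yaml. A sketch of that sequence, prefixing the versioned binary directory onto PATH the way the log does (illustrative, not the kubeadm.go implementation):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		// Equivalent to: sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
		//   kubeadm init phase <phase...> --config /var/tmp/minikube/kubeadm.yaml
		args := append([]string{
			"env", "PATH=/var/lib/minikube/binaries/v1.31.1:" + os.Getenv("PATH"),
			"kubeadm", "init", "phase",
		}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %q failed: %v\n", strings.Join(phase, " "), err)
			os.Exit(1)
		}
	}
}
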
	I1014 15:01:59.963369   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:59.963462   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.464403   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.963891   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.994849   72390 api_server.go:72] duration metric: took 1.031477803s to wait for apiserver process to appear ...
	I1014 15:02:00.994875   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:00.994897   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:01:59.278031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:59.278558   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:59.278586   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:59.278519   73581 retry.go:31] will retry after 1.187069828s: waiting for machine to come up
	I1014 15:02:00.467810   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:00.468237   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:00.468267   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:00.468195   73581 retry.go:31] will retry after 1.667312665s: waiting for machine to come up
	I1014 15:02:02.137067   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:02.137569   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:02.137590   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:02.137530   73581 retry.go:31] will retry after 1.910892221s: waiting for machine to come up
	I1014 15:01:59.994818   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:00.130085   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:00.130109   72173 pod_ready.go:82] duration metric: took 13.202838085s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:00.130121   72173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:02.142821   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:03.649728   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:03.649764   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:03.649780   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:03.754772   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:03.754805   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:03.995106   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.020015   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.020040   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.495270   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.501643   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.501694   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.995049   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.002865   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:05.002893   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:05.495412   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.499936   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:02:05.506656   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:02:05.506685   72390 api_server.go:131] duration metric: took 4.511803211s to wait for apiserver health ...
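
The healthz exchanges above show the usual apiserver warm-up: early probes return 403 for the anonymous user, then 500 while individual poststarthooks finish, until /healthz finally answers 200 "ok". A minimal sketch of such a poll loop (URL from the log; TLS verification is skipped only to keep the sketch self-contained, which is acceptable for a throwaway probe but nothing more):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	url := "https://192.168.50.128:8444/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skipping verification keeps the sketch free of CA plumbing.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			// 403 early on (anonymous user), then 500 while poststarthooks finish.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "apiserver never became healthy")
	os.Exit(1)
}
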
	I1014 15:02:05.506694   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:02:05.506700   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:05.508420   72390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:02:05.509685   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:02:05.521314   72390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
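
The bridge CNI step writes a 496-byte conflist into /etc/cni/net.d. The exact file minikube generates is not shown in the log, so the sketch below writes a representative bridge-plus-portmap conflist for the 10.244.0.0/16 pod CIDR purely as an illustration of the format, not a copy of minikube's 1-k8s.conflist:

package main

import (
	"fmt"
	"os"
)

// A representative CNI conflist for a bridge network with host-local IPAM
// and the portmap plugin. This is an assumption about the general shape of
// such a file, not the exact contents minikube writes.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
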
	I1014 15:02:05.543021   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:02:05.553508   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:02:05.553539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:02:05.553548   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:02:05.553555   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:02:05.553562   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:02:05.553567   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:02:05.553572   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:02:05.553577   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:02:05.553581   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:02:05.553587   72390 system_pods.go:74] duration metric: took 10.544168ms to wait for pod list to return data ...
	I1014 15:02:05.553593   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:02:05.558889   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:02:05.558917   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:02:05.558929   72390 node_conditions.go:105] duration metric: took 5.331009ms to run NodePressure ...
	I1014 15:02:05.558948   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:05.819037   72390 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826431   72390 kubeadm.go:739] kubelet initialised
	I1014 15:02:05.826456   72390 kubeadm.go:740] duration metric: took 7.391664ms waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826463   72390 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:05.833547   72390 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.840150   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840175   72390 pod_ready.go:82] duration metric: took 6.599969ms for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.840186   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840205   72390 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.850319   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850346   72390 pod_ready.go:82] duration metric: took 10.130163ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.850359   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850368   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.857192   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857215   72390 pod_ready.go:82] duration metric: took 6.838793ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.857228   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857237   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.946611   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946646   72390 pod_ready.go:82] duration metric: took 89.397304ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.946663   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946674   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.346368   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346400   72390 pod_ready.go:82] duration metric: took 399.71513ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.346413   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346423   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.746899   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746928   72390 pod_ready.go:82] duration metric: took 400.494872ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.746941   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746951   72390 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:07.146147   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146175   72390 pod_ready.go:82] duration metric: took 399.215075ms for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:07.146199   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146215   72390 pod_ready.go:39] duration metric: took 1.319742206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:07.146237   72390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:02:07.158049   72390 ops.go:34] apiserver oom_adj: -16
	I1014 15:02:07.158072   72390 kubeadm.go:597] duration metric: took 8.805549392s to restartPrimaryControlPlane
	I1014 15:02:07.158082   72390 kubeadm.go:394] duration metric: took 8.863707122s to StartCluster
	I1014 15:02:07.158102   72390 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.158192   72390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:07.159622   72390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.159917   72390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:02:07.159968   72390 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:02:07.160052   72390 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160074   72390 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160086   72390 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:02:07.160125   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160133   72390 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160166   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:07.160181   72390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-201291"
	I1014 15:02:07.160179   72390 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160228   72390 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160251   72390 addons.go:243] addon metrics-server should already be in state true
	I1014 15:02:07.160312   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160472   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160508   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160692   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160712   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160729   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160770   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.161892   72390 out.go:177] * Verifying Kubernetes components...
	I1014 15:02:07.163368   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:07.176101   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1014 15:02:07.176351   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I1014 15:02:07.176705   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.176834   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.177272   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177298   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177392   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177413   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177600   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I1014 15:02:07.177639   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.177703   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.178070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.178181   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178244   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178252   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178285   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178566   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.178590   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.178944   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.179107   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.181971   72390 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.181989   72390 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:02:07.182024   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.182278   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.182322   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.194707   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1014 15:02:07.195401   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.196015   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.196043   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.196413   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.196511   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35479
	I1014 15:02:07.196618   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.196977   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.197479   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.197497   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.197520   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I1014 15:02:07.197848   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.197981   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.198048   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.198544   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.198567   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.198636   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199017   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.199817   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.199824   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199864   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.200860   72390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:07.201674   72390 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:02:04.050521   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:04.051060   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:04.051099   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:04.051015   73581 retry.go:31] will retry after 2.29433775s: waiting for machine to come up
	I1014 15:02:06.347519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:06.347985   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:06.348004   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:06.347945   73581 retry.go:31] will retry after 3.499922823s: waiting for machine to come up
	I1014 15:02:07.202461   72390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.202476   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:02:07.202491   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.203259   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:02:07.203275   72390 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:02:07.203292   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.205760   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206124   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.206150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206375   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.206533   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.206676   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.206729   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206858   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.207134   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.207150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.207248   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.207455   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.207559   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.207677   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.219554   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I1014 15:02:07.220070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.220483   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.220508   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.220842   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.221004   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.222706   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.222961   72390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.222979   72390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:02:07.222997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.225715   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226209   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.226250   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.226964   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.227118   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.227254   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.362105   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:07.384279   72390 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:07.438536   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.551868   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:02:07.551897   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:02:07.606347   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.656287   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:02:07.656313   72390 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:02:07.687002   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.687027   72390 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:02:07.751715   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.810869   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.810902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811193   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.811247   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811262   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811273   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.811281   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811546   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811562   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811576   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.819897   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.819917   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.820156   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.820206   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.820179   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581553   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581583   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.581902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581943   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.581955   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.581974   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581986   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.582197   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.582211   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595214   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595493   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.595569   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595589   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595609   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595623   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595833   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595847   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595864   72390 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-201291"
	I1014 15:02:08.597967   72390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:02:04.638029   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:07.139428   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.248505   71679 start.go:364] duration metric: took 53.170862497s to acquireMachinesLock for "no-preload-813300"
	I1014 15:02:11.248567   71679 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:02:11.248581   71679 fix.go:54] fixHost starting: 
	I1014 15:02:11.248978   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:11.249022   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:11.266270   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I1014 15:02:11.266780   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:11.267302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:02:11.267319   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:11.267675   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:11.267842   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:11.267984   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:02:11.269459   71679 fix.go:112] recreateIfNeeded on no-preload-813300: state=Stopped err=<nil>
	I1014 15:02:11.269484   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	W1014 15:02:11.269589   71679 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:02:11.271434   71679 out.go:177] * Restarting existing kvm2 VM for "no-preload-813300" ...
	I1014 15:02:08.599138   72390 addons.go:510] duration metric: took 1.439175047s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:02:09.388573   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:09.851017   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851562   72639 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 15:02:09.851582   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851587   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 15:02:09.851961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.851991   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | skip adding static IP to network mk-old-k8s-version-399767 - found existing host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"}
	I1014 15:02:09.852009   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 15:02:09.852021   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 15:02:09.852031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 15:02:09.854039   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854351   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.854378   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854493   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 15:02:09.854517   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 15:02:09.854547   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:09.854559   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 15:02:09.854572   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 15:02:09.979174   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:09.979594   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 15:02:09.980252   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:09.983038   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.983502   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983891   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 15:02:09.984191   72639 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:09.984220   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:09.984487   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:09.986947   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987361   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.987389   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987514   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:09.987682   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987830   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987924   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:09.988076   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:09.988338   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:09.988352   72639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:10.098944   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:10.098968   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099242   72639 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 15:02:10.099268   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099437   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.101961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102298   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.102320   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102468   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.102670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102846   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102980   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.103124   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.103337   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.103353   72639 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 15:02:10.226037   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 15:02:10.226069   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.228712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229059   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.229082   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229228   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.229408   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229549   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.229804   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.230001   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.230018   72639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:10.344175   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:10.344206   72639 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:10.344270   72639 buildroot.go:174] setting up certificates
	I1014 15:02:10.344284   72639 provision.go:84] configureAuth start
	I1014 15:02:10.344302   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.344632   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:10.347200   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347587   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.347623   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347812   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.349962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350332   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.350364   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350502   72639 provision.go:143] copyHostCerts
	I1014 15:02:10.350558   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:10.350574   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:10.350646   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:10.350734   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:10.350742   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:10.350762   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:10.350812   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:10.350819   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:10.350837   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:10.350887   72639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
	I1014 15:02:10.602118   72639 provision.go:177] copyRemoteCerts
	I1014 15:02:10.602175   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:10.602199   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.604519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604744   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.604776   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.605127   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.605273   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.605403   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:10.689081   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:10.713512   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 15:02:10.738086   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:10.762274   72639 provision.go:87] duration metric: took 417.977128ms to configureAuth
	I1014 15:02:10.762307   72639 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:10.762486   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 15:02:10.762552   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.765134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765442   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.765469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765600   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.765756   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765903   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765998   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.766131   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.766297   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.766311   72639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:11.011252   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:11.011279   72639 machine.go:96] duration metric: took 1.027069423s to provisionDockerMachine
	I1014 15:02:11.011292   72639 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 15:02:11.011304   72639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:11.011349   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.011716   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:11.011751   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.014418   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014754   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.014790   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.015125   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.015260   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.015376   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.097883   72639 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:11.102452   72639 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:11.102481   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:11.102551   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:11.102687   72639 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:11.102781   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:11.112774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:11.138211   72639 start.go:296] duration metric: took 126.906035ms for postStartSetup
	I1014 15:02:11.138247   72639 fix.go:56] duration metric: took 18.958741429s for fixHost
	I1014 15:02:11.138270   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.140740   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141100   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.141139   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141280   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.141484   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141668   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141811   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.141974   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:11.142131   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:11.142141   72639 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:11.248330   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918131.224010283
	
	I1014 15:02:11.248355   72639 fix.go:216] guest clock: 1728918131.224010283
	I1014 15:02:11.248373   72639 fix.go:229] Guest: 2024-10-14 15:02:11.224010283 +0000 UTC Remote: 2024-10-14 15:02:11.138252894 +0000 UTC m=+233.173555624 (delta=85.757389ms)
	I1014 15:02:11.248399   72639 fix.go:200] guest clock delta is within tolerance: 85.757389ms
	I1014 15:02:11.248406   72639 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 19.068928968s
	I1014 15:02:11.248434   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.248692   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:11.251774   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.252176   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252358   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.252840   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253017   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253104   72639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:11.253150   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.253232   72639 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:11.253259   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.256105   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256529   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256662   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.256732   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256771   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256844   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.256932   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.257003   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257141   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.257131   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.257296   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257414   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.363838   72639 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:11.370414   72639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:11.521232   72639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:11.527623   72639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:11.527712   72639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:11.544532   72639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:11.544559   72639 start.go:495] detecting cgroup driver to use...
	I1014 15:02:11.544614   72639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:11.561693   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:11.576555   72639 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:11.576622   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:11.593830   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:11.608785   72639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:11.731034   72639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:11.909278   72639 docker.go:233] disabling docker service ...
	I1014 15:02:11.909359   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:11.931218   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:11.951710   72639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:12.103012   72639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:12.252290   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:12.270497   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:12.293240   72639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 15:02:12.293297   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.304881   72639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:12.304958   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.316294   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.328591   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.340085   72639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
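The lines above show cri-o being reconfigured in place: the pause image is pinned to registry.k8s.io/pause:3.2 and the cgroup manager is switched to cgroupfs by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed. As an illustration only (not minikube's actual code path), the same whole-line substitution can be expressed in Go with a multiline regexp; the path and the two key/value pairs below come straight from the log, everything else is assumed for the sketch.

package main

import (
	"log"
	"os"
	"regexp"
)

// setConfigValue replaces any existing `key = ...` line in the drop-in file
// with `key = "value"`, mirroring the sed substitutions in the log above.
func setConfigValue(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setConfigValue(conf, "pause_image", "registry.k8s.io/pause:3.2"); err != nil {
		log.Fatal(err)
	}
	if err := setConfigValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
		log.Fatal(err)
	}
}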
	I1014 15:02:12.351765   72639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:12.362454   72639 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:12.362525   72639 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:12.376865   72639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
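Here the netfilter sysctl check fails because br_netfilter has not been loaded yet, so the module is loaded explicitly and IPv4 forwarding is enabled. A minimal hedged sketch of the same check-and-fallback, assuming it runs as root on the guest (the /proc paths are the ones from the log):

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const bridgeNF = "/proc/sys/net/bridge/bridge-nf-call-iptables"

	// If the sysctl entry is absent, br_netfilter is not loaded; load it.
	if _, err := os.Stat(bridgeNF); os.IsNotExist(err) {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}

	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}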
	I1014 15:02:12.387779   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:12.528541   72639 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:12.635262   72639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:12.635335   72639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:12.641070   72639 start.go:563] Will wait 60s for crictl version
	I1014 15:02:12.641121   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:12.645111   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:12.691103   72639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:12.691199   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.720182   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.754856   72639 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 15:02:12.756005   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:12.759369   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.759890   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:12.759924   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.760164   72639 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:12.765342   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:12.782182   72639 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:12.782307   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 15:02:12.782374   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:12.841797   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:12.841871   72639 ssh_runner.go:195] Run: which lz4
	I1014 15:02:12.846193   72639 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:02:12.850982   72639 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:02:12.851019   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 15:02:09.636366   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.637804   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:13.638684   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.272626   71679 main.go:141] libmachine: (no-preload-813300) Calling .Start
	I1014 15:02:11.272827   71679 main.go:141] libmachine: (no-preload-813300) Ensuring networks are active...
	I1014 15:02:11.273510   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network default is active
	I1014 15:02:11.273954   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network mk-no-preload-813300 is active
	I1014 15:02:11.274410   71679 main.go:141] libmachine: (no-preload-813300) Getting domain xml...
	I1014 15:02:11.275263   71679 main.go:141] libmachine: (no-preload-813300) Creating domain...
	I1014 15:02:12.614590   71679 main.go:141] libmachine: (no-preload-813300) Waiting to get IP...
	I1014 15:02:12.615572   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.616018   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.616092   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.616013   73776 retry.go:31] will retry after 302.312986ms: waiting for machine to come up
	I1014 15:02:12.919678   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.920039   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.920074   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.920005   73776 retry.go:31] will retry after 371.392955ms: waiting for machine to come up
	I1014 15:02:13.292596   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.293214   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.293244   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.293164   73776 retry.go:31] will retry after 299.379251ms: waiting for machine to come up
	I1014 15:02:13.594808   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.595344   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.595370   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.595297   73776 retry.go:31] will retry after 598.480386ms: waiting for machine to come up
	I1014 15:02:14.195149   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.195744   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.195775   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.195696   73776 retry.go:31] will retry after 567.581822ms: waiting for machine to come up
	I1014 15:02:14.764315   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.764863   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.764886   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.764815   73776 retry.go:31] will retry after 587.597591ms: waiting for machine to come up
	I1014 15:02:15.353495   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:15.353948   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:15.353980   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:15.353896   73776 retry.go:31] will retry after 1.024496536s: waiting for machine to come up
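The retry.go lines above poll libvirt for the domain's DHCP lease, sleeping a little longer after each failed attempt. A rough sketch of that kind of backoff loop, with a hypothetical lookupIP callback standing in for the actual lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP calls lookupIP until it succeeds or the timeout passes, growing the
// delay (plus a little jitter) between attempts, like the "will retry after ..." lines.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // let the base delay grow gradually
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	// Toy lookup that never succeeds, just to exercise the loop briefly.
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(err)
}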
	I1014 15:02:11.889135   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:13.889200   72390 node_ready.go:49] node "default-k8s-diff-port-201291" has status "Ready":"True"
	I1014 15:02:13.889228   72390 node_ready.go:38] duration metric: took 6.504919545s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:13.889240   72390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:13.898112   72390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:15.907127   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:14.579304   72639 crio.go:462] duration metric: took 1.733147869s to copy over tarball
	I1014 15:02:14.579405   72639 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:02:17.644891   72639 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06545265s)
	I1014 15:02:17.644954   72639 crio.go:469] duration metric: took 3.065620277s to extract the tarball
	I1014 15:02:17.644979   72639 ssh_runner.go:146] rm: /preloaded.tar.lz4
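The preload path above copies a lz4-compressed image tarball onto the node and unpacks it into /var while keeping security xattrs, then removes the tarball. A hedged illustration of running that extraction and timing it the way the duration metric lines do, assuming tar and lz4 exist on the target:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as in the log: preserve security xattrs, decompress with lz4 into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	_ = os.Remove("/preloaded.tar.lz4") // drop the tarball afterwards, as the log does
}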
	I1014 15:02:17.688304   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:17.727862   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:17.727888   72639 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:17.727984   72639 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.727995   72639 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.728006   72639 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.728036   72639 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.727986   72639 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.728104   72639 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.728169   72639 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 15:02:17.728267   72639 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.729941   72639 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729954   72639 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 15:02:17.729984   72639 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.729999   72639 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.729913   72639 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.730335   72639 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.889181   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.912728   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.919124   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.920117   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.934314   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 15:02:17.951143   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.956588   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.964968   72639 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 15:02:17.965031   72639 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.965066   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:16.139535   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:18.637888   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:16.379768   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:16.380165   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:16.380236   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:16.380142   73776 retry.go:31] will retry after 1.022289492s: waiting for machine to come up
	I1014 15:02:17.403892   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:17.404406   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:17.404430   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:17.404383   73776 retry.go:31] will retry after 1.277226075s: waiting for machine to come up
	I1014 15:02:18.683704   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:18.684176   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:18.684200   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:18.684126   73776 retry.go:31] will retry after 2.146714263s: waiting for machine to come up
	I1014 15:02:18.406707   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.412201   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:21.406229   72390 pod_ready.go:93] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.406256   72390 pod_ready.go:82] duration metric: took 7.508120497s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.406269   72390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413868   72390 pod_ready.go:93] pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.413896   72390 pod_ready.go:82] duration metric: took 7.618897ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413910   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:18.041388   72639 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 15:02:18.041436   72639 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.041489   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041504   72639 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 15:02:18.041540   72639 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.041579   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069534   72639 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 15:02:18.069582   72639 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 15:02:18.069631   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069794   72639 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 15:02:18.069821   72639 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.069852   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.096492   72639 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 15:02:18.096536   72639 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.096575   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104764   72639 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 15:02:18.104810   72639 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.104816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.104854   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104876   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.104885   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.104980   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.104984   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.105025   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.119784   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.213816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.241644   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.288717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.288820   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.288931   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.289005   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.295481   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.376936   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.393755   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.449717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.449798   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.449824   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.449904   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.461905   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.508804   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 15:02:18.521502   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 15:02:18.612103   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 15:02:18.613450   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 15:02:18.613548   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 15:02:18.613625   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 15:02:18.613715   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 15:02:18.741774   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:18.888495   72639 cache_images.go:92] duration metric: took 1.16058525s to LoadCachedImages
	W1014 15:02:18.888578   72639 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
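The LoadCachedImages pass above asks the runtime, via podman image inspect, whether each control-plane image is already present and only marks the missing ones for transfer; the fallback to the local cache directory then fails too, hence the warning. A hedged sketch of just the presence check, shelling out the same way the log does (the image list is abbreviated):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether the container runtime already knows the image,
// by asking podman for its ID as in the inspect calls above.
func imagePresent(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	for _, img := range []string{
		"registry.k8s.io/kube-apiserver:v1.20.0",
		"registry.k8s.io/etcd:3.4.13-0",
		"registry.k8s.io/pause:3.2",
	} {
		if !imagePresent(img) {
			fmt.Printf("%q needs transfer\n", img)
		}
	}
}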
	I1014 15:02:18.888594   72639 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 15:02:18.888707   72639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:18.888791   72639 ssh_runner.go:195] Run: crio config
	I1014 15:02:18.943058   72639 cni.go:84] Creating CNI manager for ""
	I1014 15:02:18.943082   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:18.943091   72639 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:18.943108   72639 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 15:02:18.943225   72639 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:18.943285   72639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 15:02:18.956635   72639 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:18.956727   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:18.970846   72639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 15:02:18.992163   72639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:19.012061   72639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 15:02:19.033158   72639 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:19.037195   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
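Both host entries in this run (host.minikube.internal earlier, control-plane.minikube.internal here) are written with the same idempotent pattern: drop any existing line for the name, then append the fresh mapping. A small sketch of that pattern, assuming the process may write /etc/hosts directly rather than going through sudo as the log does:

package main

import (
	"log"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\tname" and appends
// "ip\tname", mirroring the grep -v / echo / cp pipeline in the log above.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.72.138", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}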
	I1014 15:02:19.051127   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:19.172992   72639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:19.190545   72639 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 15:02:19.190572   72639 certs.go:194] generating shared ca certs ...
	I1014 15:02:19.190592   72639 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.190786   72639 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:19.190843   72639 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:19.190853   72639 certs.go:256] generating profile certs ...
	I1014 15:02:19.190973   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 15:02:19.191053   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 15:02:19.191108   72639 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 15:02:19.191264   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:19.191302   72639 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:19.191314   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:19.191345   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:19.191374   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:19.191423   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:19.191477   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:19.192328   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:19.248981   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:19.281262   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:19.312859   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:19.351940   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 15:02:19.405710   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:19.441313   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:19.481774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 15:02:19.509433   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:19.537994   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:19.564460   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:19.593632   72639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:19.614775   72639 ssh_runner.go:195] Run: openssl version
	I1014 15:02:19.623548   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:19.636680   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642225   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642286   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.648609   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:19.661130   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:19.672988   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678119   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678189   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.684583   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:19.696685   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:19.708338   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713443   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713502   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.719482   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:19.731720   72639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:19.739006   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:19.747558   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:19.756399   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:19.764987   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:19.773320   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:19.781239   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
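Each -checkend 86400 invocation above asks openssl whether the certificate will still be valid 24 hours from now. The same test can be done without shelling out by parsing the PEM with Go's crypto/x509; a minimal sketch, using one of the file paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, which is the condition `openssl x509 -checkend` checks for.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}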
	I1014 15:02:19.788638   72639 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:19.788753   72639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:19.788810   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.829586   72639 cri.go:89] found id: ""
	I1014 15:02:19.829641   72639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:19.844632   72639 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:19.844654   72639 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:19.844708   72639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:19.860547   72639 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:19.861848   72639 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:19.862755   72639 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-399767" cluster setting kubeconfig missing "old-k8s-version-399767" context setting]
	I1014 15:02:19.863757   72639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.927447   72639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:19.940830   72639 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.138
	I1014 15:02:19.940919   72639 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:19.940947   72639 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:19.941009   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.983689   72639 cri.go:89] found id: ""
	I1014 15:02:19.983769   72639 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:20.007079   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:20.023868   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:20.023896   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:20.023971   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:20.038661   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:20.038734   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:20.054357   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:20.068771   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:20.068843   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:20.081157   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.095416   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:20.095483   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.109099   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:20.120608   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:20.120680   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
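The four blocks above apply the same rule to each kubeconfig: if the file does not mention https://control-plane.minikube.internal:8443 (here they do not exist at all), remove it so the subsequent kubeadm init phases regenerate it. A compact hedged version of that sweep:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: drop it so `kubeadm init phase kubeconfig`
			// writes a fresh one, as the log shows happening next.
			os.Remove(conf)
			fmt.Println("removed", conf)
		}
	}
}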
	I1014 15:02:20.133217   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:20.145896   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:20.311840   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.472918   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.161037865s)
	I1014 15:02:21.472953   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.739827   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.833423   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.931874   72639 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:21.931987   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.432595   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.932784   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
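The repeated pgrep calls here and below are a plain poll: every 500 ms, check whether a kube-apiserver process has appeared, up to a deadline. A hedged sketch of that wait loop:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a matching kube-apiserver process exists
// or the timeout elapses, on the 500 ms cadence visible in the log.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 when a process matched
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}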
	I1014 15:02:21.138446   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.636836   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.833532   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:20.833974   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:20.834000   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:20.833930   73776 retry.go:31] will retry after 1.936414638s: waiting for machine to come up
	I1014 15:02:22.771789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:22.772183   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:22.772206   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:22.772148   73776 retry.go:31] will retry after 2.51581517s: waiting for machine to come up
	I1014 15:02:25.290082   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:25.290491   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:25.290518   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:25.290453   73776 retry.go:31] will retry after 3.279920525s: waiting for machine to come up
	I1014 15:02:21.420355   72390 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.420385   72390 pod_ready.go:82] duration metric: took 6.465669ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.420398   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427723   72390 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.427747   72390 pod_ready.go:82] duration metric: took 7.340946ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427760   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433500   72390 pod_ready.go:93] pod "kube-proxy-rh82t" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.433526   72390 pod_ready.go:82] duration metric: took 5.757064ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433543   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802632   72390 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.802660   72390 pod_ready.go:82] duration metric: took 369.107697ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802672   72390 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:23.811046   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:26.308105   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.432728   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.932296   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.432079   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.932064   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.432201   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.932119   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.432423   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.932675   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.432633   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.932380   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.637287   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.137136   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.572901   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:28.573383   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:28.573421   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:28.573304   73776 retry.go:31] will retry after 5.283390724s: waiting for machine to come up
	I1014 15:02:28.310800   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:30.400310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.432518   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.932871   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.432350   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.932761   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.432621   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.932873   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.432716   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.932364   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.432747   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.933039   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.637300   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.136858   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.858151   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858626   71679 main.go:141] libmachine: (no-preload-813300) Found IP for machine: 192.168.61.13
	I1014 15:02:33.858660   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has current primary IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858670   71679 main.go:141] libmachine: (no-preload-813300) Reserving static IP address...
	I1014 15:02:33.859001   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.859022   71679 main.go:141] libmachine: (no-preload-813300) Reserved static IP address: 192.168.61.13
	I1014 15:02:33.859040   71679 main.go:141] libmachine: (no-preload-813300) DBG | skip adding static IP to network mk-no-preload-813300 - found existing host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"}
	I1014 15:02:33.859055   71679 main.go:141] libmachine: (no-preload-813300) DBG | Getting to WaitForSSH function...
	I1014 15:02:33.859065   71679 main.go:141] libmachine: (no-preload-813300) Waiting for SSH to be available...
	I1014 15:02:33.860949   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861245   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.861287   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861398   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH client type: external
	I1014 15:02:33.861424   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa (-rw-------)
	I1014 15:02:33.861460   71679 main.go:141] libmachine: (no-preload-813300) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:33.861476   71679 main.go:141] libmachine: (no-preload-813300) DBG | About to run SSH command:
	I1014 15:02:33.861488   71679 main.go:141] libmachine: (no-preload-813300) DBG | exit 0
	I1014 15:02:33.991450   71679 main.go:141] libmachine: (no-preload-813300) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:33.991854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetConfigRaw
	I1014 15:02:33.992623   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:33.995514   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.995884   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.995908   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.996225   71679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/config.json ...
	I1014 15:02:33.996549   71679 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:33.996572   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:33.996784   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:33.999385   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999751   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.999789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999948   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.000135   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000312   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000455   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.000648   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.000874   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.000890   71679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:34.114981   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:34.115014   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115245   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:02:34.115272   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115421   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.117557   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.117890   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.117929   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.118027   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.118210   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118365   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118524   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.118720   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.118913   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.118932   71679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-813300 && echo "no-preload-813300" | sudo tee /etc/hostname
	I1014 15:02:34.246092   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-813300
	
	I1014 15:02:34.246149   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.248672   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249095   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.249122   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249331   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.249505   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249860   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.250061   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.250272   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.250297   71679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:34.373470   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:34.373512   71679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:34.373576   71679 buildroot.go:174] setting up certificates
	I1014 15:02:34.373594   71679 provision.go:84] configureAuth start
	I1014 15:02:34.373613   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.373903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:34.376697   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.376986   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.377009   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.377137   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.379469   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379813   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.379838   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379981   71679 provision.go:143] copyHostCerts
	I1014 15:02:34.380034   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:34.380050   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:34.380106   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:34.380194   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:34.380201   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:34.380223   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:34.380282   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:34.380288   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:34.380305   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:34.380362   71679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.no-preload-813300 san=[127.0.0.1 192.168.61.13 localhost minikube no-preload-813300]
	I1014 15:02:34.421281   71679 provision.go:177] copyRemoteCerts
	I1014 15:02:34.421331   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:34.421353   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.423903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424219   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.424248   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424471   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.424665   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.424807   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.424948   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.512847   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:34.539814   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:02:34.568946   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:34.593444   71679 provision.go:87] duration metric: took 219.83393ms to configureAuth
	I1014 15:02:34.593467   71679 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:34.593661   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:34.593744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.596317   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596626   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.596659   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596819   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.597008   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597159   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597295   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.597433   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.597611   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.597631   71679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:34.837224   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:34.837244   71679 machine.go:96] duration metric: took 840.680679ms to provisionDockerMachine
	I1014 15:02:34.837256   71679 start.go:293] postStartSetup for "no-preload-813300" (driver="kvm2")
	I1014 15:02:34.837265   71679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:34.837281   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:34.837593   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:34.837625   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.840357   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840677   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.840702   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840845   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.841025   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.841193   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.841363   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.930754   71679 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:34.935428   71679 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:34.935457   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:34.935541   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:34.935659   71679 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:34.935795   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:34.946363   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:34.973029   71679 start.go:296] duration metric: took 135.76066ms for postStartSetup
	I1014 15:02:34.973074   71679 fix.go:56] duration metric: took 23.72449375s for fixHost
	I1014 15:02:34.973098   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.975897   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976211   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.976237   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976487   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.976687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976813   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976923   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.977075   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.977294   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.977309   71679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:35.091556   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918155.078304162
	
	I1014 15:02:35.091581   71679 fix.go:216] guest clock: 1728918155.078304162
	I1014 15:02:35.091590   71679 fix.go:229] Guest: 2024-10-14 15:02:35.078304162 +0000 UTC Remote: 2024-10-14 15:02:34.973079478 +0000 UTC m=+359.485826316 (delta=105.224684ms)
	I1014 15:02:35.091610   71679 fix.go:200] guest clock delta is within tolerance: 105.224684ms
	I1014 15:02:35.091616   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 23.843071366s
	I1014 15:02:35.091641   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.091899   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:35.094383   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094712   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.094733   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094910   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095353   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095534   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095589   71679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:35.095658   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.095750   71679 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:35.095773   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.098288   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098316   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098680   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098713   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098743   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098795   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098835   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099003   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099186   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099198   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099367   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099371   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.099513   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099728   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.179961   71679 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:35.205523   71679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:35.350662   71679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:35.356870   71679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:35.356941   71679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:35.374967   71679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:35.374997   71679 start.go:495] detecting cgroup driver to use...
	I1014 15:02:35.375067   71679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:35.393194   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:35.408295   71679 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:35.408362   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:35.423927   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:35.438753   71679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:32.809221   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:34.811962   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:35.567539   71679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:35.702830   71679 docker.go:233] disabling docker service ...
	I1014 15:02:35.702916   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:35.720822   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:35.735403   71679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:35.880532   71679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:36.003343   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:36.018230   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:36.037065   71679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:02:36.037134   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.047820   71679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:36.047880   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.058531   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.069760   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.081047   71679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:36.092384   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.103241   71679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.121771   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.132886   71679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:36.143239   71679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:36.143308   71679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:36.156582   71679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:36.165955   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:36.283857   71679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:36.388165   71679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:36.388243   71679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:36.393324   71679 start.go:563] Will wait 60s for crictl version
	I1014 15:02:36.393378   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.397236   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:36.444749   71679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:36.444839   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.474831   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.520531   71679 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:02:33.432474   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.932719   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.432581   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.932863   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.432886   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.932915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.432852   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.932367   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.432894   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.933035   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.637235   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.137613   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:36.521865   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:36.524566   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.524956   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:36.524984   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.525213   71679 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:36.529579   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:36.542554   71679 kubeadm.go:883] updating cluster {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:36.542701   71679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:02:36.542737   71679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:36.585681   71679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:02:36.585719   71679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:36.585806   71679 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.585838   71679 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.585865   71679 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.585886   71679 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1014 15:02:36.585925   71679 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.585814   71679 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.585954   71679 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.585843   71679 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587263   71679 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.587290   71679 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.587326   71679 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587274   71679 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1014 15:02:36.737070   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.750146   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.750401   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.767605   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1014 15:02:36.775005   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.797223   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.833657   71679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1014 15:02:36.833708   71679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.833754   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.833875   71679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1014 15:02:36.833896   71679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.833929   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.850009   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.911675   71679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1014 15:02:36.911720   71679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.911779   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973319   71679 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1014 15:02:36.973354   71679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.973383   71679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1014 15:02:36.973394   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973414   71679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.973453   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.973456   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973519   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.973619   71679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1014 15:02:36.973640   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.973644   71679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.973671   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.044689   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.044739   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.044815   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.044860   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.044907   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.044947   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166670   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.166737   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166794   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.166908   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.166924   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.272802   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.272835   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.287078   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1014 15:02:37.287167   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.287207   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.287240   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1014 15:02:37.287293   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1014 15:02:37.287320   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:37.287367   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:37.354510   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.354621   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1014 15:02:37.354659   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1014 15:02:37.354676   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354700   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1014 15:02:37.354711   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:37.354719   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354790   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1014 15:02:37.354812   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1014 15:02:37.354865   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:37.532403   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.443614   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1: (2.089069189s)
	I1014 15:02:39.443676   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1014 15:02:39.443766   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.089027703s)
	I1014 15:02:39.443790   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1014 15:02:39.443775   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:39.443813   71679 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443833   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.089105476s)
	I1014 15:02:39.443854   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443861   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1014 15:02:39.443911   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.089031069s)
	I1014 15:02:39.443933   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1014 15:02:39.443986   71679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.911557292s)
	I1014 15:02:39.444029   71679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1014 15:02:39.444057   71679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.444111   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.309522   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:39.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.432551   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.932486   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.432591   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.932694   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.432065   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.932044   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.432313   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.933055   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.432453   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.932258   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.137656   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:42.637462   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:41.514958   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.071133048s)
	I1014 15:02:41.514987   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.071109487s)
	I1014 15:02:41.515016   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1014 15:02:41.515041   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515046   71679 ssh_runner.go:235] Completed: which crictl: (2.070916553s)
	I1014 15:02:41.514994   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1014 15:02:41.515093   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515105   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:41.569878   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401013   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.885889648s)
	I1014 15:02:43.401053   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1014 15:02:43.401068   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.831164682s)
	I1014 15:02:43.401082   71679 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:43.401131   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401139   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:41.809862   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.810054   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:45.810567   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.432054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.932139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.432261   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.932517   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.432959   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.933103   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.432845   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.932825   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.432059   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.932745   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.639020   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:47.136927   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:49.137423   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:46.799144   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.397987929s)
	I1014 15:02:46.799198   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 15:02:46.799201   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398044957s)
	I1014 15:02:46.799222   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1014 15:02:46.799249   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.799295   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:46.799296   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.804398   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1014 15:02:48.971377   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.171989764s)
	I1014 15:02:48.971409   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1014 15:02:48.971436   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.971481   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.309980   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.311361   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:48.432869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.432754   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.432199   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.932861   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.432404   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.932097   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.432569   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.933078   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.141481   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.638306   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.935341   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.963834471s)
	I1014 15:02:50.935373   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1014 15:02:50.935401   71679 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:50.935452   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:51.683211   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 15:02:51.683268   71679 cache_images.go:123] Successfully loaded all cached images
	I1014 15:02:51.683277   71679 cache_images.go:92] duration metric: took 15.097525447s to LoadCachedImages
	I1014 15:02:51.683293   71679 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.31.1 crio true true} ...
	I1014 15:02:51.683441   71679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:51.683525   71679 ssh_runner.go:195] Run: crio config
	I1014 15:02:51.737769   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:02:51.737790   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:51.737799   71679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:51.737818   71679 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-813300 NodeName:no-preload-813300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:02:51.737955   71679 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-813300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:51.738019   71679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:02:51.749175   71679 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:51.749241   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:51.759120   71679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1014 15:02:51.777293   71679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:51.795073   71679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1014 15:02:51.815094   71679 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:51.819087   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:51.831806   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:51.953191   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:51.972342   71679 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300 for IP: 192.168.61.13
	I1014 15:02:51.972362   71679 certs.go:194] generating shared ca certs ...
	I1014 15:02:51.972379   71679 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:51.972534   71679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:51.972583   71679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:51.972597   71679 certs.go:256] generating profile certs ...
	I1014 15:02:51.972732   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/client.key
	I1014 15:02:51.972822   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key.4d535e2d
	I1014 15:02:51.972885   71679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key
	I1014 15:02:51.973064   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:51.973102   71679 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:51.973111   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:51.973151   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:51.973180   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:51.973203   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:51.973260   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:51.974077   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:52.019451   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:52.048323   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:52.086241   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:52.129342   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:02:52.157243   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:52.189093   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:52.214980   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:02:52.241595   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:52.270329   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:52.295153   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:52.321303   71679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:52.339181   71679 ssh_runner.go:195] Run: openssl version
	I1014 15:02:52.345152   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:52.357167   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362387   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362442   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.369003   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:52.380917   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:52.392884   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397876   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397942   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.404038   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:52.415841   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:52.426973   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431848   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431914   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.439851   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:52.455014   71679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:52.460088   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:52.466495   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:52.472659   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:52.483107   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:52.491272   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:52.497692   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:02:52.504352   71679 kubeadm.go:392] StartCluster: {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:52.504456   71679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:52.504502   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.544010   71679 cri.go:89] found id: ""
	I1014 15:02:52.544074   71679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:52.554296   71679 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:52.554314   71679 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:52.554364   71679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:52.564193   71679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:52.565367   71679 kubeconfig.go:125] found "no-preload-813300" server: "https://192.168.61.13:8443"
	I1014 15:02:52.567519   71679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:52.577268   71679 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.13
	I1014 15:02:52.577296   71679 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:52.577305   71679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:52.577343   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.614462   71679 cri.go:89] found id: ""
	I1014 15:02:52.614551   71679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:52.631835   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:52.642314   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:52.642334   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:52.642378   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:52.652036   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:52.652114   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:52.662263   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:52.672145   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:52.672214   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:52.682085   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.691628   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:52.691706   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.701314   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:52.711232   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:52.711291   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:52.722480   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:52.733359   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:52.849407   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.647528   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.863718   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.938091   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:54.046445   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:54.046544   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.546715   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.047285   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.062239   71679 api_server.go:72] duration metric: took 1.015804644s to wait for apiserver process to appear ...
	I1014 15:02:55.062265   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:55.062296   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:55.062806   71679 api_server.go:269] stopped: https://192.168.61.13:8443/healthz: Get "https://192.168.61.13:8443/healthz": dial tcp 192.168.61.13:8443: connect: connection refused
	I1014 15:02:52.811186   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.309901   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.432335   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.932860   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.433105   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.933031   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.432058   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.932422   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.432618   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.932727   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.432265   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.932733   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.136357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.136956   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.562748   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.274557   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.274587   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.274625   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.296655   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.296682   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.563094   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.567676   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:58.567717   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.063266   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.067656   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.067697   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.563300   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.569667   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.569699   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:03:00.063305   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:03:00.067834   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:03:00.079522   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:03:00.079555   71679 api_server.go:131] duration metric: took 5.017283463s to wait for apiserver health ...
	I1014 15:03:00.079565   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:03:00.079572   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:03:00.081793   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:03:00.083132   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:03:00.095329   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:03:00.114972   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:03:00.148816   71679 system_pods.go:59] 8 kube-system pods found
	I1014 15:03:00.148849   71679 system_pods.go:61] "coredns-7c65d6cfc9-5cft7" [43bb92da-74e8-4430-a889-3c23ed3fef67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:03:00.148859   71679 system_pods.go:61] "etcd-no-preload-813300" [c3e9137c-855e-49e2-8891-8df57707f75a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:03:00.148867   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [683c2d48-6c84-470c-96e5-0706a1884ee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:03:00.148872   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [405991ef-9b48-4770-ba31-a213f0eae077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:03:00.148882   71679 system_pods.go:61] "kube-proxy-jd4t4" [6c5c517b-855e-440c-976e-9c5e5d0710f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:03:00.148887   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [e76569e6-74c8-44dd-b283-a82072226686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:03:00.148892   71679 system_pods.go:61] "metrics-server-6867b74b74-br4tl" [5b3425c6-9847-447d-a9ab-076c7cc1634f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:03:00.148896   71679 system_pods.go:61] "storage-provisioner" [2c52e790-afa9-4131-8e28-801eb3f822d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 15:03:00.148906   71679 system_pods.go:74] duration metric: took 33.908487ms to wait for pod list to return data ...
	I1014 15:03:00.148918   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:03:00.161000   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:03:00.161029   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:03:00.161042   71679 node_conditions.go:105] duration metric: took 12.118841ms to run NodePressure ...
	I1014 15:03:00.161067   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:03:00.510702   71679 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515692   71679 kubeadm.go:739] kubelet initialised
	I1014 15:03:00.515715   71679 kubeadm.go:740] duration metric: took 4.986873ms waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515724   71679 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:03:00.521483   71679 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:57.810518   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:59.811287   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.432774   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.932666   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.433020   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.932671   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.432717   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.932917   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.432735   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.932668   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.432260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.932075   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.137257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.137876   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.528402   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.530210   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:04.530241   71679 pod_ready.go:82] duration metric: took 4.008725187s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:04.530254   71679 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:02.309134   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.311421   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:03.432139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.932241   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.432421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.932869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.432972   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.933010   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.432409   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.932778   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.432067   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.932749   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.636760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:07.136410   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.137483   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.537318   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.037462   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.810244   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.810932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.813334   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.432529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.932034   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.933054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.432938   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.932661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.432392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.932068   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.432066   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.932122   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.636654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.637819   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.536905   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:10.536932   71679 pod_ready.go:82] duration metric: took 6.006669219s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:10.536945   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:12.551283   71679 pod_ready.go:103] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.044142   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.044166   71679 pod_ready.go:82] duration metric: took 2.507213726s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.044176   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049176   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.049196   71679 pod_ready.go:82] duration metric: took 5.01377ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049206   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053623   71679 pod_ready.go:93] pod "kube-proxy-jd4t4" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.053646   71679 pod_ready.go:82] duration metric: took 4.434586ms for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053654   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559610   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.559632   71679 pod_ready.go:82] duration metric: took 505.972722ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559642   71679 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.309622   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.432556   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.932427   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.432053   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.932460   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.432714   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.933071   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.432567   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.932414   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.432985   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.932960   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.136599   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.137964   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.566234   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.567065   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:20.066221   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.309837   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:19.310194   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.433026   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.932015   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.932030   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.433050   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.932658   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.432667   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.933045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:21.933127   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:21.973476   72639 cri.go:89] found id: ""
	I1014 15:03:21.973507   72639 logs.go:282] 0 containers: []
	W1014 15:03:21.973517   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:21.973523   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:21.973584   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:22.011700   72639 cri.go:89] found id: ""
	I1014 15:03:22.011732   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.011742   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:22.011748   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:22.011814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:22.047721   72639 cri.go:89] found id: ""
	I1014 15:03:22.047744   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.047752   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:22.047762   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:22.047814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:22.091618   72639 cri.go:89] found id: ""
	I1014 15:03:22.091644   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.091652   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:22.091657   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:22.091706   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:22.129997   72639 cri.go:89] found id: ""
	I1014 15:03:22.130036   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.130047   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:22.130055   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:22.130114   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:22.168024   72639 cri.go:89] found id: ""
	I1014 15:03:22.168053   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.168061   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:22.168067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:22.168136   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:22.202633   72639 cri.go:89] found id: ""
	I1014 15:03:22.202660   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.202670   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:22.202677   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:22.202739   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:22.238224   72639 cri.go:89] found id: ""
	I1014 15:03:22.238251   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.238259   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:22.238267   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:22.238278   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:22.251940   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:22.251991   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:22.379777   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:22.379799   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:22.379814   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:22.456468   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:22.456507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:22.495404   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:22.495433   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:20.636995   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.637141   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.066371   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.566023   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:21.809579   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.309010   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:25.048061   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:25.068586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:25.068658   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:25.121199   72639 cri.go:89] found id: ""
	I1014 15:03:25.121228   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.121237   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:25.121243   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:25.121303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:25.174705   72639 cri.go:89] found id: ""
	I1014 15:03:25.174738   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.174749   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:25.174757   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:25.174815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:25.236972   72639 cri.go:89] found id: ""
	I1014 15:03:25.237002   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.237013   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:25.237020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:25.237077   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:25.276443   72639 cri.go:89] found id: ""
	I1014 15:03:25.276473   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.276483   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:25.276489   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:25.276541   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:25.314573   72639 cri.go:89] found id: ""
	I1014 15:03:25.314623   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.314636   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:25.314645   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:25.314708   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:25.357489   72639 cri.go:89] found id: ""
	I1014 15:03:25.357515   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.357525   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:25.357533   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:25.357595   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:25.397504   72639 cri.go:89] found id: ""
	I1014 15:03:25.397527   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.397538   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:25.397546   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:25.397597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:25.433139   72639 cri.go:89] found id: ""
	I1014 15:03:25.433162   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.433170   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:25.433179   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:25.433193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:25.448088   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:25.448121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:25.522377   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:25.522401   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:25.522415   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:25.595505   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:25.595538   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:25.643478   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:25.643511   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:25.137557   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.637096   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.067425   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.565568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:26.809419   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.309193   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.310234   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:28.195236   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:28.208612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:28.208686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:28.248538   72639 cri.go:89] found id: ""
	I1014 15:03:28.248569   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.248581   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:28.248588   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:28.248652   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:28.286103   72639 cri.go:89] found id: ""
	I1014 15:03:28.286131   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.286143   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:28.286149   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:28.286209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:28.321335   72639 cri.go:89] found id: ""
	I1014 15:03:28.321371   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.321383   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:28.321391   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:28.321453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:28.358538   72639 cri.go:89] found id: ""
	I1014 15:03:28.358571   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.358581   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:28.358588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:28.358661   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:28.397058   72639 cri.go:89] found id: ""
	I1014 15:03:28.397087   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.397099   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:28.397106   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:28.397175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:28.434010   72639 cri.go:89] found id: ""
	I1014 15:03:28.434032   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.434040   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:28.434045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:28.434095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:28.474646   72639 cri.go:89] found id: ""
	I1014 15:03:28.474672   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.474681   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:28.474687   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:28.474736   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:28.512833   72639 cri.go:89] found id: ""
	I1014 15:03:28.512860   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.512871   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:28.512882   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:28.512894   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:28.526233   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:28.526262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:28.601366   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:28.601393   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:28.601416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:28.690261   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:28.690300   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:28.734134   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:28.734158   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.290184   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:31.303493   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:31.303558   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:31.341521   72639 cri.go:89] found id: ""
	I1014 15:03:31.341552   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.341563   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:31.341569   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:31.341627   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:31.378811   72639 cri.go:89] found id: ""
	I1014 15:03:31.378839   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.378851   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:31.378859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:31.378922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:31.416282   72639 cri.go:89] found id: ""
	I1014 15:03:31.416310   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.416321   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:31.416328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:31.416392   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:31.456089   72639 cri.go:89] found id: ""
	I1014 15:03:31.456123   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.456134   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:31.456142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:31.456202   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:31.496429   72639 cri.go:89] found id: ""
	I1014 15:03:31.496468   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.496478   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:31.496485   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:31.496548   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:31.535226   72639 cri.go:89] found id: ""
	I1014 15:03:31.535248   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.535256   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:31.535262   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:31.535321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:31.572580   72639 cri.go:89] found id: ""
	I1014 15:03:31.572608   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.572623   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:31.572631   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:31.572691   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:31.606736   72639 cri.go:89] found id: ""
	I1014 15:03:31.606759   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.606766   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:31.606774   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:31.606785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:31.646048   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:31.646078   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.696818   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:31.696851   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:31.710099   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:31.710128   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:31.787756   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:31.787783   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:31.787798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:30.136436   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:32.138037   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.139660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.566034   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.567029   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.809434   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.309487   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.369392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:34.383263   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:34.383344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:34.417763   72639 cri.go:89] found id: ""
	I1014 15:03:34.417797   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.417809   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:34.417816   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:34.417890   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:34.453361   72639 cri.go:89] found id: ""
	I1014 15:03:34.453391   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.453402   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:34.453409   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:34.453488   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:34.490878   72639 cri.go:89] found id: ""
	I1014 15:03:34.490905   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.490913   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:34.490919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:34.490980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:34.527554   72639 cri.go:89] found id: ""
	I1014 15:03:34.527584   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.527595   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:34.527603   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:34.527655   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:34.564813   72639 cri.go:89] found id: ""
	I1014 15:03:34.564841   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.564851   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:34.564857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:34.564903   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:34.599899   72639 cri.go:89] found id: ""
	I1014 15:03:34.599930   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.599942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:34.599949   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:34.600019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:34.641686   72639 cri.go:89] found id: ""
	I1014 15:03:34.641717   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.641728   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:34.641735   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:34.641794   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:34.681154   72639 cri.go:89] found id: ""
	I1014 15:03:34.681184   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.681195   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:34.681205   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:34.681218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:34.719638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:34.719672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:34.771687   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:34.771722   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:34.785943   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:34.785972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:34.861821   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:34.861861   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:34.861875   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.441605   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:37.456763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:37.456828   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:37.494176   72639 cri.go:89] found id: ""
	I1014 15:03:37.494202   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.494210   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:37.494216   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:37.494268   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:37.538802   72639 cri.go:89] found id: ""
	I1014 15:03:37.538834   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.538846   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:37.538853   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:37.538913   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:37.586282   72639 cri.go:89] found id: ""
	I1014 15:03:37.586312   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.586322   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:37.586328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:37.586397   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:37.632673   72639 cri.go:89] found id: ""
	I1014 15:03:37.632698   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.632709   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:37.632715   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:37.632771   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:37.673340   72639 cri.go:89] found id: ""
	I1014 15:03:37.673364   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.673372   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:37.673377   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:37.673427   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:37.718725   72639 cri.go:89] found id: ""
	I1014 15:03:37.718750   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.718758   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:37.718764   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:37.718807   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:37.760560   72639 cri.go:89] found id: ""
	I1014 15:03:37.760587   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.760597   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:37.760605   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:37.760665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:37.800912   72639 cri.go:89] found id: ""
	I1014 15:03:37.800941   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.800949   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:37.800957   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:37.800968   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:37.815338   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:37.815363   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:37.893018   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:37.893050   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:37.893067   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.978315   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:37.978349   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:36.637635   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:39.136295   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.065915   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.066310   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.810020   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.810460   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.019760   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:38.019788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.570918   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:40.586058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:40.586122   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:40.623753   72639 cri.go:89] found id: ""
	I1014 15:03:40.623784   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.623795   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:40.623801   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:40.623862   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:40.663909   72639 cri.go:89] found id: ""
	I1014 15:03:40.663937   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.663946   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:40.663953   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:40.664008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:40.698572   72639 cri.go:89] found id: ""
	I1014 15:03:40.698615   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.698626   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:40.698633   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:40.698683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:40.734882   72639 cri.go:89] found id: ""
	I1014 15:03:40.734907   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.734914   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:40.734920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:40.734976   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:40.768429   72639 cri.go:89] found id: ""
	I1014 15:03:40.768455   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.768462   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:40.768468   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:40.768527   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:40.803429   72639 cri.go:89] found id: ""
	I1014 15:03:40.803456   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.803466   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:40.803474   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:40.803535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:40.842854   72639 cri.go:89] found id: ""
	I1014 15:03:40.842883   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.842905   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:40.842913   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:40.842988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:40.879638   72639 cri.go:89] found id: ""
	I1014 15:03:40.879661   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.879669   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:40.879677   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:40.879687   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:40.924949   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:40.924983   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.976271   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:40.976304   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:40.991492   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:40.991520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:41.071418   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:41.071439   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:41.071453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:41.136877   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.637356   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.566353   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.065982   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.066405   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.310188   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.811549   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.652387   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:43.666239   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:43.666317   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:43.705726   72639 cri.go:89] found id: ""
	I1014 15:03:43.705752   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.705761   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:43.705766   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:43.705814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:43.745648   72639 cri.go:89] found id: ""
	I1014 15:03:43.745672   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.745680   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:43.745685   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:43.745731   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:43.783032   72639 cri.go:89] found id: ""
	I1014 15:03:43.783055   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.783063   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:43.783068   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:43.783115   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:43.820582   72639 cri.go:89] found id: ""
	I1014 15:03:43.820607   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.820617   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:43.820623   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:43.820669   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:43.862312   72639 cri.go:89] found id: ""
	I1014 15:03:43.862338   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.862348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:43.862353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:43.862404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:43.898338   72639 cri.go:89] found id: ""
	I1014 15:03:43.898368   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.898379   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:43.898388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:43.898448   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:43.934682   72639 cri.go:89] found id: ""
	I1014 15:03:43.934709   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.934719   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:43.934726   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:43.934781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:43.970209   72639 cri.go:89] found id: ""
	I1014 15:03:43.970237   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.970247   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:43.970257   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:43.970269   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:44.024791   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:44.024832   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:44.038431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:44.038457   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:44.117255   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:44.117291   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:44.117308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:44.199397   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:44.199436   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:46.739819   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:46.755553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:46.755625   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:46.797225   72639 cri.go:89] found id: ""
	I1014 15:03:46.797253   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.797265   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:46.797272   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:46.797335   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:46.832999   72639 cri.go:89] found id: ""
	I1014 15:03:46.833025   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.833036   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:46.833043   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:46.833103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:46.872711   72639 cri.go:89] found id: ""
	I1014 15:03:46.872733   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.872741   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:46.872746   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:46.872795   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:46.909945   72639 cri.go:89] found id: ""
	I1014 15:03:46.909968   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.909977   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:46.909985   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:46.910046   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:46.946036   72639 cri.go:89] found id: ""
	I1014 15:03:46.946067   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.946080   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:46.946087   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:46.946141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:46.981772   72639 cri.go:89] found id: ""
	I1014 15:03:46.981806   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.981819   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:46.981828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:46.981896   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:47.022761   72639 cri.go:89] found id: ""
	I1014 15:03:47.022790   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.022800   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:47.022807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:47.022869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:47.057368   72639 cri.go:89] found id: ""
	I1014 15:03:47.057392   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.057400   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:47.057408   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:47.057418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:47.134369   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:47.134408   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:47.179550   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:47.179586   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:47.233317   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:47.233355   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:47.247598   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:47.247629   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:47.321309   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:45.637760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.136826   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:47.067003   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.565410   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:50.812241   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.821955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:49.836907   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:49.836975   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:49.876651   72639 cri.go:89] found id: ""
	I1014 15:03:49.876682   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.876694   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:49.876713   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:49.876781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:49.913440   72639 cri.go:89] found id: ""
	I1014 15:03:49.913464   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.913473   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:49.913479   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:49.913535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:49.949352   72639 cri.go:89] found id: ""
	I1014 15:03:49.949383   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.949395   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:49.949402   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:49.949463   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:49.984599   72639 cri.go:89] found id: ""
	I1014 15:03:49.984629   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.984641   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:49.984649   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:49.984709   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:50.028049   72639 cri.go:89] found id: ""
	I1014 15:03:50.028072   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.028083   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:50.028090   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:50.028166   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:50.062272   72639 cri.go:89] found id: ""
	I1014 15:03:50.062294   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.062302   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:50.062308   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:50.062358   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:50.099722   72639 cri.go:89] found id: ""
	I1014 15:03:50.099750   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.099762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:50.099769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:50.099830   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:50.139984   72639 cri.go:89] found id: ""
	I1014 15:03:50.140005   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.140013   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:50.140020   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:50.140032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:50.218467   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:50.218500   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:50.260600   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:50.260635   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:50.313725   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:50.313757   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:50.328431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:50.328462   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:50.401334   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:52.901787   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:52.917836   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:52.917902   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:52.955387   72639 cri.go:89] found id: ""
	I1014 15:03:52.955418   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.955431   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:52.955440   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:52.955504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:52.990890   72639 cri.go:89] found id: ""
	I1014 15:03:52.990924   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.990936   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:52.990945   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:52.991004   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:50.636581   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.137639   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:51.566403   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:54.066690   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.310174   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:55.809402   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.032344   72639 cri.go:89] found id: ""
	I1014 15:03:53.032374   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.032384   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:53.032390   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:53.032458   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:53.073501   72639 cri.go:89] found id: ""
	I1014 15:03:53.073527   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.073537   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:53.073544   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:53.073602   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:53.114273   72639 cri.go:89] found id: ""
	I1014 15:03:53.114307   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.114316   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:53.114334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:53.114389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:53.155448   72639 cri.go:89] found id: ""
	I1014 15:03:53.155475   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.155484   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:53.155490   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:53.155539   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:53.191304   72639 cri.go:89] found id: ""
	I1014 15:03:53.191338   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.191350   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:53.191357   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:53.191438   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:53.224664   72639 cri.go:89] found id: ""
	I1014 15:03:53.224691   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.224702   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:53.224727   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:53.224744   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:53.275751   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:53.275786   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:53.289275   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:53.289303   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:53.369828   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:53.369855   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:53.369871   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:53.457248   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:53.457285   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:56.003384   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:56.017722   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:56.017782   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:56.056644   72639 cri.go:89] found id: ""
	I1014 15:03:56.056675   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.056686   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:56.056694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:56.056757   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:56.094482   72639 cri.go:89] found id: ""
	I1014 15:03:56.094507   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.094517   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:56.094524   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:56.094583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:56.129884   72639 cri.go:89] found id: ""
	I1014 15:03:56.129913   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.129921   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:56.129926   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:56.129974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:56.167171   72639 cri.go:89] found id: ""
	I1014 15:03:56.167198   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.167206   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:56.167211   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:56.167264   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:56.204400   72639 cri.go:89] found id: ""
	I1014 15:03:56.204433   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.204442   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:56.204447   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:56.204494   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:56.240407   72639 cri.go:89] found id: ""
	I1014 15:03:56.240437   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.240448   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:56.240456   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:56.240517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:56.277653   72639 cri.go:89] found id: ""
	I1014 15:03:56.277679   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.277687   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:56.277693   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:56.277738   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:56.313423   72639 cri.go:89] found id: ""
	I1014 15:03:56.313451   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.313459   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:56.313468   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:56.313480   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:56.368094   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:56.368133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:56.382563   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:56.382621   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:56.455106   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:56.455130   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:56.455144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:56.532288   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:56.532329   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:55.636007   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:57.637196   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:56.566763   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.066227   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:58.309184   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:00.309370   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.072469   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:59.089024   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:59.089094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:59.130798   72639 cri.go:89] found id: ""
	I1014 15:03:59.130829   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.130840   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:59.130848   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:59.130908   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:59.167828   72639 cri.go:89] found id: ""
	I1014 15:03:59.167854   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.167864   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:59.167871   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:59.167932   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:59.223482   72639 cri.go:89] found id: ""
	I1014 15:03:59.223509   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.223520   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:59.223528   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:59.223590   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:59.261186   72639 cri.go:89] found id: ""
	I1014 15:03:59.261231   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.261243   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:59.261251   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:59.261314   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:59.296924   72639 cri.go:89] found id: ""
	I1014 15:03:59.296985   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.297000   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:59.297008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:59.297084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:59.333891   72639 cri.go:89] found id: ""
	I1014 15:03:59.333915   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.333923   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:59.333929   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:59.333991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:59.374106   72639 cri.go:89] found id: ""
	I1014 15:03:59.374134   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.374143   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:59.374150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:59.374222   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:59.412256   72639 cri.go:89] found id: ""
	I1014 15:03:59.412283   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.412291   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:59.412298   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:59.412308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:59.492869   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:59.492904   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:59.492923   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:59.576441   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:59.576473   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.618638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:59.618668   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:59.671295   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:59.671331   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.184689   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:02.197763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:02.197833   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:02.231709   72639 cri.go:89] found id: ""
	I1014 15:04:02.231734   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.231746   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:02.231753   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:02.231815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:02.269259   72639 cri.go:89] found id: ""
	I1014 15:04:02.269291   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.269303   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:02.269311   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:02.269390   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:02.305926   72639 cri.go:89] found id: ""
	I1014 15:04:02.305956   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.305967   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:02.305975   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:02.306034   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:02.349516   72639 cri.go:89] found id: ""
	I1014 15:04:02.349544   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.349557   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:02.349563   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:02.349622   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:02.388334   72639 cri.go:89] found id: ""
	I1014 15:04:02.388361   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.388371   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:02.388376   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:02.388428   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:02.422742   72639 cri.go:89] found id: ""
	I1014 15:04:02.422770   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.422781   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:02.422789   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:02.422850   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:02.463686   72639 cri.go:89] found id: ""
	I1014 15:04:02.463710   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.463718   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:02.463724   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:02.463770   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:02.498352   72639 cri.go:89] found id: ""
	I1014 15:04:02.498383   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.498394   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:02.498404   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:02.498418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.512531   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:02.512561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:02.585331   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:02.585359   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:02.585373   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:02.667376   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:02.667414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:02.708101   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:02.708133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:00.136170   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.138198   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:01.566456   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.066934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.309906   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.310009   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.310084   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:05.259839   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:05.273102   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:05.273186   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:05.311745   72639 cri.go:89] found id: ""
	I1014 15:04:05.311768   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.311776   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:05.311787   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:05.311834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:05.349313   72639 cri.go:89] found id: ""
	I1014 15:04:05.349336   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.349344   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:05.349352   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:05.349416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:05.388003   72639 cri.go:89] found id: ""
	I1014 15:04:05.388026   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.388034   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:05.388039   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:05.388098   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:05.426636   72639 cri.go:89] found id: ""
	I1014 15:04:05.426665   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.426676   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:05.426683   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:05.426745   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:05.461945   72639 cri.go:89] found id: ""
	I1014 15:04:05.461974   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.461983   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:05.461989   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:05.462049   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:05.497099   72639 cri.go:89] found id: ""
	I1014 15:04:05.497130   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.497142   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:05.497149   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:05.497216   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:05.531621   72639 cri.go:89] found id: ""
	I1014 15:04:05.531652   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.531664   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:05.531671   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:05.531729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:05.568950   72639 cri.go:89] found id: ""
	I1014 15:04:05.568973   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.568983   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:05.568992   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:05.569012   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.624806   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:05.624846   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:05.651912   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:05.651961   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:05.740342   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:05.740369   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:05.740384   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:05.817901   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:05.817932   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:04.636643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:07.137525   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.566519   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.567458   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.809718   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.809968   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.360267   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:08.373249   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:08.373325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:08.409485   72639 cri.go:89] found id: ""
	I1014 15:04:08.409520   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.409535   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:08.409542   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:08.409604   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:08.444977   72639 cri.go:89] found id: ""
	I1014 15:04:08.445000   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.445008   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:08.445014   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:08.445061   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:08.478080   72639 cri.go:89] found id: ""
	I1014 15:04:08.478108   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.478117   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:08.478123   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:08.478169   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:08.511510   72639 cri.go:89] found id: ""
	I1014 15:04:08.511536   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.511545   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:08.511552   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:08.511603   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:08.546260   72639 cri.go:89] found id: ""
	I1014 15:04:08.546285   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.546292   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:08.546299   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:08.546347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:08.582775   72639 cri.go:89] found id: ""
	I1014 15:04:08.582799   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.582810   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:08.582816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:08.582875   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:08.619208   72639 cri.go:89] found id: ""
	I1014 15:04:08.619231   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.619239   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:08.619244   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:08.619299   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:08.654823   72639 cri.go:89] found id: ""
	I1014 15:04:08.654849   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.654860   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:08.654870   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:08.654885   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:08.704543   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:08.704574   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:08.718111   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:08.718144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:08.792267   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:08.792290   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:08.792309   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:08.870178   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:08.870210   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:11.409975   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:11.432171   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:11.432243   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:11.468997   72639 cri.go:89] found id: ""
	I1014 15:04:11.469021   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.469030   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:11.469035   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:11.469094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:11.504312   72639 cri.go:89] found id: ""
	I1014 15:04:11.504337   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.504346   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:11.504354   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:11.504417   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:11.540628   72639 cri.go:89] found id: ""
	I1014 15:04:11.540654   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.540662   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:11.540667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:11.540729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:11.576466   72639 cri.go:89] found id: ""
	I1014 15:04:11.576491   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.576498   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:11.576506   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:11.576550   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:11.611466   72639 cri.go:89] found id: ""
	I1014 15:04:11.611501   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.611512   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:11.611519   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:11.611578   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:11.650089   72639 cri.go:89] found id: ""
	I1014 15:04:11.650116   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.650126   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:11.650133   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:11.650191   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:11.686538   72639 cri.go:89] found id: ""
	I1014 15:04:11.686563   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.686571   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:11.686577   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:11.686654   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:11.725494   72639 cri.go:89] found id: ""
	I1014 15:04:11.725517   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.725524   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:11.725532   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:11.725545   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:11.779062   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:11.779102   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:11.792726   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:11.792753   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:11.867945   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:11.867972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:11.867986   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:11.952299   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:11.952340   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:09.636140   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:11.636455   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.136183   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.567626   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.065875   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.066484   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.310523   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.811094   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.493922   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:14.506754   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:14.506817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:14.540456   72639 cri.go:89] found id: ""
	I1014 15:04:14.540480   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.540489   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:14.540495   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:14.540545   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:14.574819   72639 cri.go:89] found id: ""
	I1014 15:04:14.574843   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.574853   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:14.574859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:14.574917   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:14.608834   72639 cri.go:89] found id: ""
	I1014 15:04:14.608859   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.608868   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:14.608873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:14.608920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:14.644182   72639 cri.go:89] found id: ""
	I1014 15:04:14.644210   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.644218   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:14.644223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:14.644283   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:14.679113   72639 cri.go:89] found id: ""
	I1014 15:04:14.679145   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.679156   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:14.679164   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:14.679228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:14.716111   72639 cri.go:89] found id: ""
	I1014 15:04:14.716142   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.716154   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:14.716167   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:14.716220   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:14.755884   72639 cri.go:89] found id: ""
	I1014 15:04:14.755907   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.755915   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:14.755920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:14.755968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:14.794167   72639 cri.go:89] found id: ""
	I1014 15:04:14.794195   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.794207   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:14.794217   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:14.794234   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:14.844828   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:14.844864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:14.859424   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:14.859451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:14.936660   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:14.936687   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:14.936703   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:15.017034   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:15.017070   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:17.555604   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:17.570628   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:17.570687   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:17.612919   72639 cri.go:89] found id: ""
	I1014 15:04:17.612943   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.612951   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:17.612956   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:17.613002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:17.651178   72639 cri.go:89] found id: ""
	I1014 15:04:17.651210   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.651220   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:17.651226   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:17.651278   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:17.687923   72639 cri.go:89] found id: ""
	I1014 15:04:17.687955   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.687966   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:17.687973   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:17.688024   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:17.724759   72639 cri.go:89] found id: ""
	I1014 15:04:17.724790   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.724800   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:17.724807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:17.724866   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:17.760189   72639 cri.go:89] found id: ""
	I1014 15:04:17.760212   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.760220   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:17.760226   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:17.760274   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:17.797517   72639 cri.go:89] found id: ""
	I1014 15:04:17.797541   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.797549   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:17.797554   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:17.797601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:17.833238   72639 cri.go:89] found id: ""
	I1014 15:04:17.833261   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.833270   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:17.833275   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:17.833321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:17.868828   72639 cri.go:89] found id: ""
	I1014 15:04:17.868857   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.868865   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:17.868873   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:17.868883   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:17.956972   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:17.957011   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:16.137357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.636865   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:17.067415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:19.566146   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.310380   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:20.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.006354   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:18.006390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:18.056237   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:18.056271   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:18.070763   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:18.070792   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:18.147471   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:20.648238   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:20.661465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:20.661534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:20.695869   72639 cri.go:89] found id: ""
	I1014 15:04:20.695894   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.695902   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:20.695907   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:20.695957   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:20.729271   72639 cri.go:89] found id: ""
	I1014 15:04:20.729295   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.729313   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:20.729319   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:20.729364   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:20.767110   72639 cri.go:89] found id: ""
	I1014 15:04:20.767137   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.767147   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:20.767154   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:20.767209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:20.802752   72639 cri.go:89] found id: ""
	I1014 15:04:20.802781   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.802791   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:20.802798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:20.802846   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:20.841958   72639 cri.go:89] found id: ""
	I1014 15:04:20.841987   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.841998   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:20.842005   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:20.842066   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:20.878869   72639 cri.go:89] found id: ""
	I1014 15:04:20.878896   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.878907   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:20.878914   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:20.878974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:20.913802   72639 cri.go:89] found id: ""
	I1014 15:04:20.913838   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.913852   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:20.913861   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:20.913922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:20.948350   72639 cri.go:89] found id: ""
	I1014 15:04:20.948378   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.948395   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:20.948403   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:20.948416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:21.001065   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:21.001098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:21.014427   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:21.014458   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:21.091386   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:21.091412   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:21.091432   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:21.175255   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:21.175299   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:21.137358   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.636623   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.066415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:24.066476   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.809925   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:25.309528   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.718260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:23.732366   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:23.732445   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:23.767269   72639 cri.go:89] found id: ""
	I1014 15:04:23.767299   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.767311   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:23.767317   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:23.767379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:23.808502   72639 cri.go:89] found id: ""
	I1014 15:04:23.808532   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.808543   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:23.808550   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:23.808606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:23.845632   72639 cri.go:89] found id: ""
	I1014 15:04:23.845664   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.845677   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:23.845685   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:23.845753   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:23.880218   72639 cri.go:89] found id: ""
	I1014 15:04:23.880249   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.880261   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:23.880268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:23.880332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:23.915674   72639 cri.go:89] found id: ""
	I1014 15:04:23.915697   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.915705   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:23.915710   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:23.915767   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:23.950526   72639 cri.go:89] found id: ""
	I1014 15:04:23.950559   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.950570   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:23.950578   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:23.950656   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:23.986130   72639 cri.go:89] found id: ""
	I1014 15:04:23.986167   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.986178   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:23.986186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:23.986246   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:24.027112   72639 cri.go:89] found id: ""
	I1014 15:04:24.027141   72639 logs.go:282] 0 containers: []
	W1014 15:04:24.027154   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:24.027165   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:24.027181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:24.082559   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:24.082610   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:24.096900   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:24.096929   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:24.173293   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:24.173327   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:24.173341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:24.256921   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:24.256962   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:26.802073   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:26.817307   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:26.817366   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:26.855777   72639 cri.go:89] found id: ""
	I1014 15:04:26.855805   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.855817   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:26.855825   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:26.855876   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:26.892260   72639 cri.go:89] found id: ""
	I1014 15:04:26.892288   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.892300   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:26.892308   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:26.892369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:26.931066   72639 cri.go:89] found id: ""
	I1014 15:04:26.931103   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.931114   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:26.931122   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:26.931174   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:26.966890   72639 cri.go:89] found id: ""
	I1014 15:04:26.966923   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.966933   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:26.966941   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:26.967002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:27.001338   72639 cri.go:89] found id: ""
	I1014 15:04:27.001368   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.001379   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:27.001386   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:27.001454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:27.041798   72639 cri.go:89] found id: ""
	I1014 15:04:27.041830   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.041839   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:27.041844   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:27.041905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:27.080248   72639 cri.go:89] found id: ""
	I1014 15:04:27.080279   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.080288   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:27.080293   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:27.080341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:27.116207   72639 cri.go:89] found id: ""
	I1014 15:04:27.116234   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.116242   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:27.116250   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:27.116264   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:27.191149   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:27.191174   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:27.191203   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:27.275771   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:27.275808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:27.323223   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:27.323254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:27.375409   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:27.375455   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:26.137156   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.637895   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:26.066790   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.565208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:27.810315   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.309211   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:29.890408   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:29.904797   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:29.904853   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:29.938655   72639 cri.go:89] found id: ""
	I1014 15:04:29.938685   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.938698   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:29.938705   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:29.938765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:29.976477   72639 cri.go:89] found id: ""
	I1014 15:04:29.976508   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.976519   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:29.976526   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:29.976583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:30.014813   72639 cri.go:89] found id: ""
	I1014 15:04:30.014842   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.014853   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:30.014860   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:30.014926   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:30.050804   72639 cri.go:89] found id: ""
	I1014 15:04:30.050833   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.050844   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:30.050854   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:30.050918   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:30.087921   72639 cri.go:89] found id: ""
	I1014 15:04:30.087946   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.087954   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:30.087959   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:30.088016   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:30.125411   72639 cri.go:89] found id: ""
	I1014 15:04:30.125446   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.125458   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:30.125465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:30.125519   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:30.162067   72639 cri.go:89] found id: ""
	I1014 15:04:30.162099   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.162110   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:30.162118   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:30.162181   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:30.200376   72639 cri.go:89] found id: ""
	I1014 15:04:30.200406   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.200418   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:30.200435   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:30.200451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:30.279965   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:30.279992   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:30.280007   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:30.364866   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:30.364900   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:30.408808   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:30.408842   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:30.464473   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:30.464507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:32.980254   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:32.994254   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:32.994320   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:31.136531   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.137201   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.566228   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.567393   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.065955   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.810349   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.308794   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.035996   72639 cri.go:89] found id: ""
	I1014 15:04:33.036025   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.036036   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:33.036043   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:33.036103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:33.077494   72639 cri.go:89] found id: ""
	I1014 15:04:33.077522   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.077531   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:33.077538   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:33.077585   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:33.112666   72639 cri.go:89] found id: ""
	I1014 15:04:33.112695   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.112705   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:33.112711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:33.112772   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:33.150229   72639 cri.go:89] found id: ""
	I1014 15:04:33.150266   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.150276   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:33.150282   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:33.150336   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:33.186960   72639 cri.go:89] found id: ""
	I1014 15:04:33.186989   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.187001   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:33.187008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:33.187062   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:33.223596   72639 cri.go:89] found id: ""
	I1014 15:04:33.223631   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.223641   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:33.223647   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:33.223711   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:33.260137   72639 cri.go:89] found id: ""
	I1014 15:04:33.260162   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.260170   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:33.260175   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:33.260228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:33.298072   72639 cri.go:89] found id: ""
	I1014 15:04:33.298095   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.298103   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:33.298110   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:33.298121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:33.379587   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:33.379623   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:33.423427   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:33.423456   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:33.474644   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:33.474683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:33.488324   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:33.488354   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:33.556257   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.056955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:36.072461   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:36.072536   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:36.109467   72639 cri.go:89] found id: ""
	I1014 15:04:36.109493   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.109502   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:36.109509   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:36.109561   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:36.147985   72639 cri.go:89] found id: ""
	I1014 15:04:36.148012   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.148020   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:36.148025   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:36.148071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:36.183885   72639 cri.go:89] found id: ""
	I1014 15:04:36.183906   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.183914   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:36.183919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:36.183968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:36.220994   72639 cri.go:89] found id: ""
	I1014 15:04:36.221025   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.221036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:36.221044   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:36.221108   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:36.256586   72639 cri.go:89] found id: ""
	I1014 15:04:36.256612   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.256621   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:36.256627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:36.256683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:36.293229   72639 cri.go:89] found id: ""
	I1014 15:04:36.293256   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.293265   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:36.293272   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:36.293339   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:36.329254   72639 cri.go:89] found id: ""
	I1014 15:04:36.329279   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.329290   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:36.329297   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:36.329357   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:36.366495   72639 cri.go:89] found id: ""
	I1014 15:04:36.366526   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.366538   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:36.366548   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:36.366561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:36.420985   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:36.421018   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:36.435532   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:36.435565   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:36.510459   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.510484   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:36.510499   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:36.593057   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:36.593094   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:35.637182   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.637348   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.066334   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.566950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.309629   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.809500   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.138570   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:39.152280   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:39.152342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:39.186647   72639 cri.go:89] found id: ""
	I1014 15:04:39.186676   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.186687   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:39.186694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:39.186754   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:39.223560   72639 cri.go:89] found id: ""
	I1014 15:04:39.223586   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.223594   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:39.223599   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:39.223644   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:39.257835   72639 cri.go:89] found id: ""
	I1014 15:04:39.257867   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.257879   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:39.257886   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:39.257947   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:39.294656   72639 cri.go:89] found id: ""
	I1014 15:04:39.294684   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.294692   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:39.294699   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:39.294750   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:39.333474   72639 cri.go:89] found id: ""
	I1014 15:04:39.333503   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.333513   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:39.333520   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:39.333586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:39.374385   72639 cri.go:89] found id: ""
	I1014 15:04:39.374414   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.374424   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:39.374435   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:39.374483   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:39.412856   72639 cri.go:89] found id: ""
	I1014 15:04:39.412888   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.412899   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:39.412906   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:39.412966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:39.463087   72639 cri.go:89] found id: ""
	I1014 15:04:39.463115   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.463127   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:39.463138   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:39.463154   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:39.514309   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:39.514342   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:39.528947   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:39.528972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:39.603984   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:39.604004   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:39.604016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.685053   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:39.685093   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.234178   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:42.247421   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:42.247497   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:42.288496   72639 cri.go:89] found id: ""
	I1014 15:04:42.288521   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.288529   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:42.288535   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:42.288588   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:42.324346   72639 cri.go:89] found id: ""
	I1014 15:04:42.324382   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.324394   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:42.324401   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:42.324469   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:42.362879   72639 cri.go:89] found id: ""
	I1014 15:04:42.362910   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.362922   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:42.362928   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:42.362991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:42.399347   72639 cri.go:89] found id: ""
	I1014 15:04:42.399375   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.399383   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:42.399389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:42.399473   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:42.434942   72639 cri.go:89] found id: ""
	I1014 15:04:42.434971   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.434990   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:42.434999   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:42.435063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:42.470886   72639 cri.go:89] found id: ""
	I1014 15:04:42.470916   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.470928   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:42.470934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:42.470994   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:42.510713   72639 cri.go:89] found id: ""
	I1014 15:04:42.510742   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.510752   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:42.510758   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:42.510820   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:42.544506   72639 cri.go:89] found id: ""
	I1014 15:04:42.544538   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.544547   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:42.544559   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:42.544570   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.588658   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:42.588694   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:42.642165   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:42.642198   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:42.658073   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:42.658110   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:42.730486   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:42.730510   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:42.730524   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.637476   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.637715   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.137654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:42.065534   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.066309   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.809932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.309377   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.309699   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:45.307806   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:45.321664   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:45.321733   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:45.359670   72639 cri.go:89] found id: ""
	I1014 15:04:45.359697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.359708   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:45.359715   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:45.359781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:45.398673   72639 cri.go:89] found id: ""
	I1014 15:04:45.398703   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.398715   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:45.398722   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:45.398784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:45.441656   72639 cri.go:89] found id: ""
	I1014 15:04:45.441685   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.441697   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:45.441705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:45.441768   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:45.476159   72639 cri.go:89] found id: ""
	I1014 15:04:45.476188   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.476195   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:45.476201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:45.476263   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:45.513776   72639 cri.go:89] found id: ""
	I1014 15:04:45.513807   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.513819   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:45.513828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:45.513894   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:45.550336   72639 cri.go:89] found id: ""
	I1014 15:04:45.550371   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.550382   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:45.550388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:45.550450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:45.586668   72639 cri.go:89] found id: ""
	I1014 15:04:45.586697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.586705   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:45.586711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:45.586760   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:45.622530   72639 cri.go:89] found id: ""
	I1014 15:04:45.622559   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.622568   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:45.622576   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:45.622589   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:45.674471   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:45.674504   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:45.690430   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:45.690463   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:45.772133   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:45.772165   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:45.772181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.859835   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:45.859880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:46.636239   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.637696   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.565440   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.569076   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.309788   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.310209   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.434011   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:48.448747   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:48.448826   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:48.493642   72639 cri.go:89] found id: ""
	I1014 15:04:48.493668   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.493680   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:48.493687   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:48.493747   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:48.530298   72639 cri.go:89] found id: ""
	I1014 15:04:48.530327   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.530336   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:48.530344   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:48.530403   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:48.566215   72639 cri.go:89] found id: ""
	I1014 15:04:48.566242   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.566252   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:48.566261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:48.566325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:48.604528   72639 cri.go:89] found id: ""
	I1014 15:04:48.604553   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.604561   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:48.604566   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:48.604616   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:48.646152   72639 cri.go:89] found id: ""
	I1014 15:04:48.646180   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.646191   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:48.646198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:48.646257   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:48.682670   72639 cri.go:89] found id: ""
	I1014 15:04:48.682696   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.682704   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:48.682711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:48.682762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:48.722292   72639 cri.go:89] found id: ""
	I1014 15:04:48.722318   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.722326   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:48.722335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:48.722400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:48.762474   72639 cri.go:89] found id: ""
	I1014 15:04:48.762506   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.762518   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:48.762528   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:48.762553   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:48.776628   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:48.776652   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:48.849904   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:48.849928   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:48.849941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:48.927033   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:48.927068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.970775   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:48.970807   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
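	The cycle above (and the near-identical cycles that follow, all from pid 72639, the old-k8s-version node) shows minikube polling for a kube-apiserver process, listing CRI containers for each control-plane component, finding none, and then falling back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status logs; the recurring "connection to the server localhost:8443 was refused" confirms the API server never came up on this node. A minimal diagnostic sketch one could run on the node over SSH, using the same tools the log already shows (crictl, journalctl); curl and the /healthz path are assumptions added here, not taken from the log:

	    sudo crictl ps -a --name kube-apiserver        # expect no rows, matching the 'found id: ""' lines above
	    sudo journalctl -u kubelet -n 200 --no-pager   # kubelet logs usually say why the static pods were not started
	    curl -sk https://localhost:8443/healthz        # expect "connection refused" while the apiserver is down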
	I1014 15:04:51.521113   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:51.535318   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:51.535389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:51.582631   72639 cri.go:89] found id: ""
	I1014 15:04:51.582658   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.582666   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:51.582671   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:51.582721   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:51.655323   72639 cri.go:89] found id: ""
	I1014 15:04:51.655362   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.655371   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:51.655376   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:51.655433   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:51.722837   72639 cri.go:89] found id: ""
	I1014 15:04:51.722863   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.722875   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:51.722882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:51.722939   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:51.759917   72639 cri.go:89] found id: ""
	I1014 15:04:51.759946   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.759957   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:51.759963   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:51.760023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:51.798656   72639 cri.go:89] found id: ""
	I1014 15:04:51.798689   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.798702   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:51.798711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:51.798777   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:51.839285   72639 cri.go:89] found id: ""
	I1014 15:04:51.839312   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.839324   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:51.839334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:51.839391   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:51.876997   72639 cri.go:89] found id: ""
	I1014 15:04:51.877028   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.877038   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:51.877045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:51.877091   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:51.913991   72639 cri.go:89] found id: ""
	I1014 15:04:51.914020   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.914028   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:51.914036   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:51.914046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:51.993392   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:51.993427   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:52.039722   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:52.039756   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:52.090901   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:52.090937   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:52.105014   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:52.105052   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:52.175505   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:51.137343   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.636660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.575054   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.067208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:52.809933   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.810498   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
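	The interleaved pod_ready lines come from three other clusters being driven in parallel (pids 72173, 71679 and 72390), each stuck waiting for a metrics-server-6867b74b74-* pod that never reports Ready. A short sketch of how one might inspect the reason from the host, assuming kubectl access to each profile's context; the <profile> placeholder and the k8s-app=metrics-server label selector are assumptions, not taken from the log:

	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server -o wide
	    kubectl --context <profile> -n kube-system describe pod metrics-server-6867b74b74-zc8zh   # pod name taken from the log above
	    kubectl --context <profile> -n kube-system logs deploy/metrics-server --tail=100           # readiness-probe and TLS errors usually show up here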
	I1014 15:04:54.676549   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:54.690113   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:54.690204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:54.726478   72639 cri.go:89] found id: ""
	I1014 15:04:54.726511   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.726523   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:54.726538   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:54.726611   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:54.764990   72639 cri.go:89] found id: ""
	I1014 15:04:54.765017   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.765025   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:54.765031   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:54.765095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:54.804779   72639 cri.go:89] found id: ""
	I1014 15:04:54.804808   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.804819   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:54.804828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:54.804886   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:54.848657   72639 cri.go:89] found id: ""
	I1014 15:04:54.848682   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.848698   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:54.848705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:54.848765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:54.886806   72639 cri.go:89] found id: ""
	I1014 15:04:54.886834   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.886845   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:54.886853   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:54.886912   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:54.923297   72639 cri.go:89] found id: ""
	I1014 15:04:54.923323   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.923330   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:54.923335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:54.923380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:54.966297   72639 cri.go:89] found id: ""
	I1014 15:04:54.966321   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.966329   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:54.966334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:54.966382   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:55.012047   72639 cri.go:89] found id: ""
	I1014 15:04:55.012071   72639 logs.go:282] 0 containers: []
	W1014 15:04:55.012079   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:55.012087   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:55.012097   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:55.066031   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:55.066063   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:55.080954   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:55.080981   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:55.159644   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:55.159670   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:55.159683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:55.243303   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:55.243341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:57.784555   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:57.799051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:57.799132   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:57.841084   72639 cri.go:89] found id: ""
	I1014 15:04:57.841108   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.841115   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:57.841121   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:57.841167   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:57.881510   72639 cri.go:89] found id: ""
	I1014 15:04:57.881542   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.881555   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:57.881562   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:57.881624   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:57.916893   72639 cri.go:89] found id: ""
	I1014 15:04:57.916923   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.916934   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:57.916940   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:57.916988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:57.956991   72639 cri.go:89] found id: ""
	I1014 15:04:57.957023   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.957036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:57.957046   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:57.957118   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:57.993765   72639 cri.go:89] found id: ""
	I1014 15:04:57.993792   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.993803   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:57.993809   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:57.993869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:56.136994   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.137736   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:55.566021   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.567950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:00.068276   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.310643   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:59.808898   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.032044   72639 cri.go:89] found id: ""
	I1014 15:04:58.032085   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.032098   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:58.032107   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:58.032173   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:58.069733   72639 cri.go:89] found id: ""
	I1014 15:04:58.069754   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.069762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:58.069767   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:58.069813   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:58.105851   72639 cri.go:89] found id: ""
	I1014 15:04:58.105880   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.105891   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:58.105901   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:58.105914   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:58.159922   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:58.159956   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:58.173779   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:58.173802   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:58.253551   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:58.253576   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:58.253591   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:58.342607   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:58.342647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:00.884705   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:00.900147   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:00.900215   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:00.940372   72639 cri.go:89] found id: ""
	I1014 15:05:00.940402   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.940413   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:00.940420   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:00.940489   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:00.981400   72639 cri.go:89] found id: ""
	I1014 15:05:00.981431   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.981441   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:00.981447   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:00.981517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:01.021981   72639 cri.go:89] found id: ""
	I1014 15:05:01.022002   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.022011   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:01.022016   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:01.022067   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:01.056976   72639 cri.go:89] found id: ""
	I1014 15:05:01.057005   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.057013   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:01.057020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:01.057063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:01.092702   72639 cri.go:89] found id: ""
	I1014 15:05:01.092732   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.092739   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:01.092745   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:01.092803   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:01.128861   72639 cri.go:89] found id: ""
	I1014 15:05:01.128892   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.128902   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:01.128908   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:01.128958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:01.162672   72639 cri.go:89] found id: ""
	I1014 15:05:01.162702   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.162712   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:01.162719   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:01.162791   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:01.202724   72639 cri.go:89] found id: ""
	I1014 15:05:01.202751   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.202761   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:01.202770   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:01.202785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:01.280702   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:01.280723   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:01.280735   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:01.362909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:01.362943   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:01.406737   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:01.406766   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:01.460090   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:01.460125   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:00.636730   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.136587   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:02.568415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:05.066568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:01.809661   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:04.309079   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:06.309544   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.975661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:03.989811   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:03.989874   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:04.028396   72639 cri.go:89] found id: ""
	I1014 15:05:04.028426   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.028438   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:04.028445   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:04.028499   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:04.065871   72639 cri.go:89] found id: ""
	I1014 15:05:04.065901   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.065912   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:04.065919   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:04.065980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:04.103155   72639 cri.go:89] found id: ""
	I1014 15:05:04.103184   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.103192   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:04.103198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:04.103248   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:04.139503   72639 cri.go:89] found id: ""
	I1014 15:05:04.139531   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.139539   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:04.139545   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:04.139601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:04.171638   72639 cri.go:89] found id: ""
	I1014 15:05:04.171663   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.171671   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:04.171676   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:04.171734   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:04.213720   72639 cri.go:89] found id: ""
	I1014 15:05:04.213751   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.213760   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:04.213766   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:04.213815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:04.248088   72639 cri.go:89] found id: ""
	I1014 15:05:04.248109   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.248117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:04.248121   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:04.248183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:04.286454   72639 cri.go:89] found id: ""
	I1014 15:05:04.286479   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.286487   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:04.286495   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:04.286506   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:04.339564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:04.339599   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:04.353034   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:04.353061   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:04.432764   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:04.432786   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:04.432797   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:04.514561   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:04.514613   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.057507   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:07.072798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:07.072873   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:07.113672   72639 cri.go:89] found id: ""
	I1014 15:05:07.113694   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.113701   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:07.113706   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:07.113761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:07.149321   72639 cri.go:89] found id: ""
	I1014 15:05:07.149348   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.149357   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:07.149362   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:07.149416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:07.185717   72639 cri.go:89] found id: ""
	I1014 15:05:07.185748   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.185760   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:07.185768   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:07.185822   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:07.225747   72639 cri.go:89] found id: ""
	I1014 15:05:07.225772   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.225783   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:07.225791   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:07.225843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:07.265834   72639 cri.go:89] found id: ""
	I1014 15:05:07.265864   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.265875   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:07.265882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:07.265944   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:07.300595   72639 cri.go:89] found id: ""
	I1014 15:05:07.300622   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.300631   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:07.300637   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:07.300686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:07.343249   72639 cri.go:89] found id: ""
	I1014 15:05:07.343280   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.343291   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:07.343298   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:07.343365   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:07.379525   72639 cri.go:89] found id: ""
	I1014 15:05:07.379549   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.379557   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:07.379564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:07.379576   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:07.393622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:07.393653   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:07.473973   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:07.473998   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:07.474013   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:07.556937   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:07.556971   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.602224   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:07.602249   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:05.137157   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.137297   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.137708   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.066795   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.566723   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:08.809562   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.309821   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:10.156920   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:10.170971   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:10.171037   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:10.206568   72639 cri.go:89] found id: ""
	I1014 15:05:10.206610   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.206623   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:10.206630   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:10.206689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:10.249075   72639 cri.go:89] found id: ""
	I1014 15:05:10.249101   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.249110   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:10.249121   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:10.249175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:10.285620   72639 cri.go:89] found id: ""
	I1014 15:05:10.285649   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.285660   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:10.285667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:10.285730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:10.322291   72639 cri.go:89] found id: ""
	I1014 15:05:10.322314   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.322322   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:10.322327   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:10.322379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:10.356691   72639 cri.go:89] found id: ""
	I1014 15:05:10.356720   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.356730   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:10.356738   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:10.356802   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:10.401192   72639 cri.go:89] found id: ""
	I1014 15:05:10.401223   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.401234   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:10.401242   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:10.401303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:10.438198   72639 cri.go:89] found id: ""
	I1014 15:05:10.438225   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.438236   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:10.438243   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:10.438380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:10.474142   72639 cri.go:89] found id: ""
	I1014 15:05:10.474166   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.474174   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:10.474181   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:10.474193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:10.546549   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:10.546569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:10.546582   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:10.624235   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:10.624268   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:10.664896   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:10.664926   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.719425   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:10.719464   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:11.637824   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.139552   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.566806   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.066803   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.809728   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.310153   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.234162   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:13.247614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:13.247689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:13.285040   72639 cri.go:89] found id: ""
	I1014 15:05:13.285068   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.285078   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:13.285086   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:13.285154   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:13.334084   72639 cri.go:89] found id: ""
	I1014 15:05:13.334125   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.334133   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:13.334139   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:13.334204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:13.369164   72639 cri.go:89] found id: ""
	I1014 15:05:13.369199   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.369211   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:13.369223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:13.369285   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:13.405202   72639 cri.go:89] found id: ""
	I1014 15:05:13.405232   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.405244   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:13.405252   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:13.405304   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:13.443271   72639 cri.go:89] found id: ""
	I1014 15:05:13.443302   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.443311   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:13.443317   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:13.443369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:13.483541   72639 cri.go:89] found id: ""
	I1014 15:05:13.483570   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.483580   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:13.483588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:13.483650   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:13.518580   72639 cri.go:89] found id: ""
	I1014 15:05:13.518622   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.518633   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:13.518641   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:13.518701   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:13.553638   72639 cri.go:89] found id: ""
	I1014 15:05:13.553668   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.553678   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:13.553688   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:13.553702   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:13.605379   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:13.605413   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.620525   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:13.620556   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:13.699628   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:13.699658   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:13.699672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:13.778006   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:13.778046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.316703   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:16.331511   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:16.331577   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:16.367045   72639 cri.go:89] found id: ""
	I1014 15:05:16.367075   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.367083   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:16.367089   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:16.367144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:16.403240   72639 cri.go:89] found id: ""
	I1014 15:05:16.403264   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.403274   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:16.403285   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:16.403344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:16.438570   72639 cri.go:89] found id: ""
	I1014 15:05:16.438612   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.438625   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:16.438632   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:16.438694   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:16.477153   72639 cri.go:89] found id: ""
	I1014 15:05:16.477174   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.477182   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:16.477187   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:16.477232   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:16.516308   72639 cri.go:89] found id: ""
	I1014 15:05:16.516336   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.516348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:16.516355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:16.516421   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:16.551337   72639 cri.go:89] found id: ""
	I1014 15:05:16.551365   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.551375   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:16.551383   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:16.551450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:16.587073   72639 cri.go:89] found id: ""
	I1014 15:05:16.587105   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.587117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:16.587125   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:16.587183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:16.623940   72639 cri.go:89] found id: ""
	I1014 15:05:16.623962   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.623970   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:16.623978   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:16.623989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.671593   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:16.671618   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:16.723057   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:16.723092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:16.737623   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:16.737656   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:16.809539   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:16.809569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:16.809592   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:16.636818   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.637340   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.566523   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.065985   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.809554   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.390406   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:19.404850   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:19.404928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:19.446931   72639 cri.go:89] found id: ""
	I1014 15:05:19.446962   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.446973   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:19.446980   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:19.447043   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:19.488112   72639 cri.go:89] found id: ""
	I1014 15:05:19.488136   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.488144   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:19.488150   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:19.488208   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:19.523333   72639 cri.go:89] found id: ""
	I1014 15:05:19.523365   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.523382   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:19.523389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:19.523447   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:19.557887   72639 cri.go:89] found id: ""
	I1014 15:05:19.557910   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.557918   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:19.557927   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:19.557972   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:19.593792   72639 cri.go:89] found id: ""
	I1014 15:05:19.593815   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.593822   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:19.593873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:19.593922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:19.628291   72639 cri.go:89] found id: ""
	I1014 15:05:19.628324   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.628335   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:19.628343   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:19.628405   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:19.664088   72639 cri.go:89] found id: ""
	I1014 15:05:19.664118   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.664130   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:19.664138   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:19.664211   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:19.700825   72639 cri.go:89] found id: ""
	I1014 15:05:19.700853   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.700863   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:19.700873   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:19.700886   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:19.741631   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:19.741666   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:19.792667   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:19.792706   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:19.806928   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:19.806965   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:19.880030   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:19.880059   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:19.880073   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.465251   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:22.479031   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:22.479096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:22.519123   72639 cri.go:89] found id: ""
	I1014 15:05:22.519147   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.519158   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:22.519171   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:22.519235   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:22.552250   72639 cri.go:89] found id: ""
	I1014 15:05:22.552277   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.552287   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:22.552294   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:22.552354   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:22.594213   72639 cri.go:89] found id: ""
	I1014 15:05:22.594243   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.594253   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:22.594261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:22.594310   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:22.630081   72639 cri.go:89] found id: ""
	I1014 15:05:22.630110   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.630121   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:22.630129   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:22.630195   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:22.665454   72639 cri.go:89] found id: ""
	I1014 15:05:22.665485   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.665497   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:22.665505   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:22.665568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:22.710697   72639 cri.go:89] found id: ""
	I1014 15:05:22.710725   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.710734   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:22.710742   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:22.710798   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:22.748486   72639 cri.go:89] found id: ""
	I1014 15:05:22.748516   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.748527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:22.748534   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:22.748594   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:22.784646   72639 cri.go:89] found id: ""
	I1014 15:05:22.784674   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.784684   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:22.784695   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:22.784709   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:22.797853   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:22.797880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:22.875382   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:22.875406   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:22.875422   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.957055   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:22.957089   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:20.638448   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.137051   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.066950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.566775   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.309958   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:25.810168   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.008642   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:23.008672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.561277   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:25.575543   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:25.575606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:25.614260   72639 cri.go:89] found id: ""
	I1014 15:05:25.614283   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.614291   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:25.614296   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:25.614353   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:25.654267   72639 cri.go:89] found id: ""
	I1014 15:05:25.654295   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.654307   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:25.654314   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:25.654385   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:25.707597   72639 cri.go:89] found id: ""
	I1014 15:05:25.707626   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.707637   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:25.707644   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:25.707707   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:25.747477   72639 cri.go:89] found id: ""
	I1014 15:05:25.747500   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.747508   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:25.747513   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:25.747571   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:25.785245   72639 cri.go:89] found id: ""
	I1014 15:05:25.785270   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.785279   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:25.785288   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:25.785342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:25.820619   72639 cri.go:89] found id: ""
	I1014 15:05:25.820643   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.820651   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:25.820665   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:25.820722   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:25.861644   72639 cri.go:89] found id: ""
	I1014 15:05:25.861665   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.861673   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:25.861678   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:25.861724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:25.901009   72639 cri.go:89] found id: ""
	I1014 15:05:25.901032   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.901046   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:25.901056   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:25.901068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:25.942918   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:25.942941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.993931   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:25.993964   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:26.008252   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:26.008280   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:26.087316   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:26.087336   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:26.087347   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:25.636727   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:27.637053   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:26.066529   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.567224   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.308855   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:30.811310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.667377   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:28.682586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:28.682682   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:28.729576   72639 cri.go:89] found id: ""
	I1014 15:05:28.729600   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.729608   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:28.729614   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:28.729673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:28.766637   72639 cri.go:89] found id: ""
	I1014 15:05:28.766669   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.766682   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:28.766690   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:28.766762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:28.802280   72639 cri.go:89] found id: ""
	I1014 15:05:28.802308   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.802317   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:28.802322   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:28.802395   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:28.840788   72639 cri.go:89] found id: ""
	I1014 15:05:28.840822   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.840833   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:28.840841   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:28.840898   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:28.878403   72639 cri.go:89] found id: ""
	I1014 15:05:28.878437   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.878447   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:28.878453   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:28.878505   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:28.919054   72639 cri.go:89] found id: ""
	I1014 15:05:28.919082   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.919090   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:28.919096   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:28.919146   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:28.955097   72639 cri.go:89] found id: ""
	I1014 15:05:28.955124   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.955134   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:28.955142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:28.955214   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:28.995681   72639 cri.go:89] found id: ""
	I1014 15:05:28.995711   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.995722   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:28.995731   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:28.995746   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:29.073041   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:29.073066   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:29.073083   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:29.152803   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:29.152838   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:29.192205   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:29.192239   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:29.248128   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:29.248166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:31.762647   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:31.776372   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:31.776454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:31.812234   72639 cri.go:89] found id: ""
	I1014 15:05:31.812259   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.812268   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:31.812275   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:31.812347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:31.850248   72639 cri.go:89] found id: ""
	I1014 15:05:31.850277   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.850294   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:31.850301   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:31.850363   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:31.887768   72639 cri.go:89] found id: ""
	I1014 15:05:31.887796   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.887808   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:31.887816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:31.887870   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:31.923434   72639 cri.go:89] found id: ""
	I1014 15:05:31.923464   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.923476   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:31.923483   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:31.923547   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:31.961027   72639 cri.go:89] found id: ""
	I1014 15:05:31.961055   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.961066   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:31.961073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:31.961135   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:31.996222   72639 cri.go:89] found id: ""
	I1014 15:05:31.996250   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.996260   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:31.996267   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:31.996329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:32.034396   72639 cri.go:89] found id: ""
	I1014 15:05:32.034441   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.034452   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:32.034460   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:32.034528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:32.080105   72639 cri.go:89] found id: ""
	I1014 15:05:32.080142   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.080153   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:32.080164   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:32.080178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:32.161120   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:32.161151   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:32.213511   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:32.213546   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:32.271250   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:32.271287   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:32.285452   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:32.285483   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:32.366108   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:30.136896   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:32.138906   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:31.066229   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.066370   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.067821   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.309846   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.310713   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:34.867317   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:34.882058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:34.882125   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.926220   72639 cri.go:89] found id: ""
	I1014 15:05:34.926251   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.926261   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:34.926268   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:34.926341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:34.965657   72639 cri.go:89] found id: ""
	I1014 15:05:34.965691   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.965702   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:34.965709   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:34.965775   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:35.002422   72639 cri.go:89] found id: ""
	I1014 15:05:35.002446   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.002454   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:35.002459   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:35.002523   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:35.040029   72639 cri.go:89] found id: ""
	I1014 15:05:35.040057   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.040067   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:35.040073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:35.040137   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:35.077041   72639 cri.go:89] found id: ""
	I1014 15:05:35.077067   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.077075   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:35.077080   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:35.077129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:35.113723   72639 cri.go:89] found id: ""
	I1014 15:05:35.113754   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.113763   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:35.113770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:35.113854   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:35.152003   72639 cri.go:89] found id: ""
	I1014 15:05:35.152025   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.152033   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:35.152038   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:35.152084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:35.186707   72639 cri.go:89] found id: ""
	I1014 15:05:35.186735   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.186746   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:35.186756   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:35.186769   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:35.267899   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:35.267941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:35.310382   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:35.310414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:35.364811   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:35.364852   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:35.378359   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:35.378386   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:35.453522   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:37.953807   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:37.967515   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:37.967579   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.637257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.137643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.566344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:39.566704   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.810414   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:40.308798   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:38.007923   72639 cri.go:89] found id: ""
	I1014 15:05:38.007955   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.007964   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:38.007969   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:38.008023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:38.047451   72639 cri.go:89] found id: ""
	I1014 15:05:38.047476   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.047484   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:38.047490   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:38.047542   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:38.087141   72639 cri.go:89] found id: ""
	I1014 15:05:38.087165   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.087174   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:38.087186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:38.087234   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:38.126556   72639 cri.go:89] found id: ""
	I1014 15:05:38.126583   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.126604   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:38.126612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:38.126670   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:38.165318   72639 cri.go:89] found id: ""
	I1014 15:05:38.165341   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.165350   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:38.165356   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:38.165400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:38.199498   72639 cri.go:89] found id: ""
	I1014 15:05:38.199533   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.199544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:38.199553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:38.199618   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:38.235030   72639 cri.go:89] found id: ""
	I1014 15:05:38.235058   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.235067   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:38.235072   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:38.235129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:38.268900   72639 cri.go:89] found id: ""
	I1014 15:05:38.268926   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.268935   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:38.268943   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:38.268957   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:38.282503   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:38.282532   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:38.357943   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:38.357972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:38.357987   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:38.448417   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:38.448453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:38.490023   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:38.490049   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.045691   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:41.061188   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:41.061251   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:41.102885   72639 cri.go:89] found id: ""
	I1014 15:05:41.102909   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.102917   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:41.102923   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:41.102971   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:41.139402   72639 cri.go:89] found id: ""
	I1014 15:05:41.139427   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.139437   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:41.139444   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:41.139501   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:41.179881   72639 cri.go:89] found id: ""
	I1014 15:05:41.179926   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.179939   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:41.179946   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:41.180008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:41.215861   72639 cri.go:89] found id: ""
	I1014 15:05:41.215897   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.215910   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:41.215919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:41.215987   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:41.251314   72639 cri.go:89] found id: ""
	I1014 15:05:41.251341   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.251351   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:41.251355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:41.251404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:41.285986   72639 cri.go:89] found id: ""
	I1014 15:05:41.286010   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.286017   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:41.286025   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:41.286071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:41.323730   72639 cri.go:89] found id: ""
	I1014 15:05:41.323756   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.323764   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:41.323769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:41.323816   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:41.360787   72639 cri.go:89] found id: ""
	I1014 15:05:41.360817   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.360825   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:41.360834   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:41.360847   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:41.403137   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:41.403172   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.459217   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:41.459253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:41.473529   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:41.473558   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:41.547384   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:41.547405   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:41.547416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:39.637477   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.137176   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:41.569245   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.066760   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.309212   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.310281   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.129494   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:44.144061   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:44.144129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:44.185872   72639 cri.go:89] found id: ""
	I1014 15:05:44.185896   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.185904   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:44.185909   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:44.185955   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:44.222618   72639 cri.go:89] found id: ""
	I1014 15:05:44.222648   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.222658   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:44.222663   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:44.222723   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:44.260730   72639 cri.go:89] found id: ""
	I1014 15:05:44.260761   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.260773   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:44.260780   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:44.260872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:44.303033   72639 cri.go:89] found id: ""
	I1014 15:05:44.303124   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.303141   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:44.303150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:44.303223   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:44.344573   72639 cri.go:89] found id: ""
	I1014 15:05:44.344600   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.344609   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:44.344614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:44.344660   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:44.386091   72639 cri.go:89] found id: ""
	I1014 15:05:44.386122   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.386131   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:44.386137   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:44.386199   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:44.424609   72639 cri.go:89] found id: ""
	I1014 15:05:44.424634   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.424644   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:44.424656   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:44.424724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:44.463997   72639 cri.go:89] found id: ""
	I1014 15:05:44.464023   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.464033   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:44.464043   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:44.464057   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:44.516883   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:44.516921   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:44.530785   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:44.530820   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:44.605202   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:44.605229   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:44.605245   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.685277   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:44.685312   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:47.227851   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:47.242737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:47.242817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:47.279395   72639 cri.go:89] found id: ""
	I1014 15:05:47.279421   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.279428   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:47.279434   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:47.279495   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:47.315002   72639 cri.go:89] found id: ""
	I1014 15:05:47.315032   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.315043   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:47.315050   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:47.315120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:47.354133   72639 cri.go:89] found id: ""
	I1014 15:05:47.354162   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.354173   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:47.354180   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:47.354245   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:47.389394   72639 cri.go:89] found id: ""
	I1014 15:05:47.389419   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.389427   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:47.389439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:47.389498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:47.426564   72639 cri.go:89] found id: ""
	I1014 15:05:47.426592   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.426619   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:47.426627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:47.426676   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:47.466953   72639 cri.go:89] found id: ""
	I1014 15:05:47.466980   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.466989   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:47.466996   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:47.467065   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:47.508563   72639 cri.go:89] found id: ""
	I1014 15:05:47.508595   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.508605   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:47.508613   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:47.508665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:47.548974   72639 cri.go:89] found id: ""
	I1014 15:05:47.549002   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.549012   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:47.549022   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:47.549036   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:47.604768   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:47.604799   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:47.619681   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:47.619717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:47.692479   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:47.692506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:47.692522   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:47.773711   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:47.773751   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:44.637916   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:47.137070   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.566472   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.566743   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.809406   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.811359   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:51.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.314509   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:50.330883   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:50.330958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:50.375090   72639 cri.go:89] found id: ""
	I1014 15:05:50.375121   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.375133   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:50.375140   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:50.375201   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:50.415000   72639 cri.go:89] found id: ""
	I1014 15:05:50.415031   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.415041   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:50.415048   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:50.415099   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:50.453937   72639 cri.go:89] found id: ""
	I1014 15:05:50.453967   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.453976   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:50.453983   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:50.454047   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:50.498752   72639 cri.go:89] found id: ""
	I1014 15:05:50.498778   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.498785   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:50.498790   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:50.498858   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:50.537819   72639 cri.go:89] found id: ""
	I1014 15:05:50.537855   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.537864   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:50.537871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:50.537920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:50.577141   72639 cri.go:89] found id: ""
	I1014 15:05:50.577168   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.577179   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:50.577186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:50.577250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:50.612462   72639 cri.go:89] found id: ""
	I1014 15:05:50.612504   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.612527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:50.612535   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:50.612597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:50.648816   72639 cri.go:89] found id: ""
	I1014 15:05:50.648845   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.648855   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:50.648866   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:50.648879   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:50.662546   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:50.662578   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:50.733128   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:50.733152   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:50.733166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:50.810884   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:50.810913   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.855878   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:50.855905   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:49.637103   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:52.137615   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.567300   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.066883   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.810090   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.312861   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.413608   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:53.428380   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:53.428453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:53.463440   72639 cri.go:89] found id: ""
	I1014 15:05:53.463464   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.463473   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:53.463479   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:53.463534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:53.499024   72639 cri.go:89] found id: ""
	I1014 15:05:53.499050   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.499058   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:53.499064   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:53.499121   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:53.534396   72639 cri.go:89] found id: ""
	I1014 15:05:53.534425   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.534435   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:53.534442   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:53.534504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:53.571396   72639 cri.go:89] found id: ""
	I1014 15:05:53.571422   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.571432   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:53.571439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:53.571496   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:53.606219   72639 cri.go:89] found id: ""
	I1014 15:05:53.606247   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.606254   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:53.606260   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:53.606309   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:53.644906   72639 cri.go:89] found id: ""
	I1014 15:05:53.644929   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.644938   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:53.644945   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:53.645005   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:53.684764   72639 cri.go:89] found id: ""
	I1014 15:05:53.684795   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.684808   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:53.684817   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:53.684872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:53.720559   72639 cri.go:89] found id: ""
	I1014 15:05:53.720587   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.720596   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:53.720605   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:53.720626   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.773759   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:53.773798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:53.787688   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:53.787717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:53.863141   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:53.863163   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:53.863176   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:53.942949   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:53.942989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:56.487207   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:56.500670   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:56.500730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:56.533851   72639 cri.go:89] found id: ""
	I1014 15:05:56.533882   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.533894   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:56.533901   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:56.533964   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:56.573169   72639 cri.go:89] found id: ""
	I1014 15:05:56.573194   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.573201   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:56.573207   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:56.573260   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:56.608110   72639 cri.go:89] found id: ""
	I1014 15:05:56.608138   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.608151   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:56.608158   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:56.608218   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:56.646030   72639 cri.go:89] found id: ""
	I1014 15:05:56.646054   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.646061   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:56.646067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:56.646112   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:56.689427   72639 cri.go:89] found id: ""
	I1014 15:05:56.689455   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.689465   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:56.689473   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:56.689528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:56.723831   72639 cri.go:89] found id: ""
	I1014 15:05:56.723856   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.723865   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:56.723871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:56.723928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:56.756700   72639 cri.go:89] found id: ""
	I1014 15:05:56.756725   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.756734   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:56.756741   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:56.756808   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:56.788201   72639 cri.go:89] found id: ""
	I1014 15:05:56.788228   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.788235   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:56.788242   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:56.788253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:56.847840   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:56.847876   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:56.861984   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:56.862016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:56.933190   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:56.933214   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:56.933226   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:57.015909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:57.015958   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:54.636591   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.638712   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.137008   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:55.566153   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:57.566963   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.067261   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:58.810164   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.811078   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.559421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:59.575593   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:59.575673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:59.611369   72639 cri.go:89] found id: ""
	I1014 15:05:59.611399   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.611409   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:59.611416   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:59.611485   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:59.645786   72639 cri.go:89] found id: ""
	I1014 15:05:59.645817   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.645827   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:59.645834   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:59.645895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:59.681463   72639 cri.go:89] found id: ""
	I1014 15:05:59.681491   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.681499   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:59.681504   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:59.681553   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:59.723738   72639 cri.go:89] found id: ""
	I1014 15:05:59.723767   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.723775   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:59.723782   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:59.723845   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:59.763890   72639 cri.go:89] found id: ""
	I1014 15:05:59.763919   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.763958   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:59.763966   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:59.764027   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:59.802981   72639 cri.go:89] found id: ""
	I1014 15:05:59.803007   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.803015   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:59.803021   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:59.803074   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:59.841887   72639 cri.go:89] found id: ""
	I1014 15:05:59.841916   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.841927   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:59.841934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:59.841989   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:59.877190   72639 cri.go:89] found id: ""
	I1014 15:05:59.877221   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.877231   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:59.877240   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:59.877254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:59.890838   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:59.890864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:59.970122   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:59.970147   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:59.970163   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:00.058994   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:00.059032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:00.103227   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:00.103262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:02.655437   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:02.671240   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:02.671307   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:02.708826   72639 cri.go:89] found id: ""
	I1014 15:06:02.708859   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.708871   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:02.708879   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:02.708943   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:02.744504   72639 cri.go:89] found id: ""
	I1014 15:06:02.744535   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.744546   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:02.744553   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:02.744615   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:02.781144   72639 cri.go:89] found id: ""
	I1014 15:06:02.781180   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.781193   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:02.781201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:02.781281   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:02.819527   72639 cri.go:89] found id: ""
	I1014 15:06:02.819558   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.819567   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:02.819572   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:02.819630   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:02.855653   72639 cri.go:89] found id: ""
	I1014 15:06:02.855683   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.855693   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:02.855700   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:02.855761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:02.900843   72639 cri.go:89] found id: ""
	I1014 15:06:02.900876   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.900888   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:02.900896   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:02.900961   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:02.941812   72639 cri.go:89] found id: ""
	I1014 15:06:02.941840   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.941851   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:02.941857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:02.941919   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:02.980213   72639 cri.go:89] found id: ""
	I1014 15:06:02.980238   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.980246   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:02.980253   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:02.980265   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:00.130683   72173 pod_ready.go:82] duration metric: took 4m0.000550021s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:00.130707   72173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:06:00.130723   72173 pod_ready.go:39] duration metric: took 4m13.708579322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:00.130753   72173 kubeadm.go:597] duration metric: took 4m21.979284634s to restartPrimaryControlPlane
	W1014 15:06:00.130836   72173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:00.130870   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:02.566183   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.066638   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.309953   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.311484   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.034263   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:03.034301   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:03.048574   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:03.048606   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:03.121902   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:03.121925   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:03.121939   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:03.197407   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:03.197445   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:05.737723   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:05.751892   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:05.751959   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:05.789209   72639 cri.go:89] found id: ""
	I1014 15:06:05.789235   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.789242   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:05.789247   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:05.789294   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:05.826189   72639 cri.go:89] found id: ""
	I1014 15:06:05.826220   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.826229   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:05.826236   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:05.826344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:05.864264   72639 cri.go:89] found id: ""
	I1014 15:06:05.864297   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.864308   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:05.864314   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:05.864371   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:05.899697   72639 cri.go:89] found id: ""
	I1014 15:06:05.899724   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.899732   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:05.899737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:05.899784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:05.939552   72639 cri.go:89] found id: ""
	I1014 15:06:05.939583   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.939593   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:05.939601   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:05.939668   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:05.999732   72639 cri.go:89] found id: ""
	I1014 15:06:05.999759   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.999770   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:05.999776   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:05.999834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:06.036228   72639 cri.go:89] found id: ""
	I1014 15:06:06.036259   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.036276   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:06.036284   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:06.036343   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:06.071744   72639 cri.go:89] found id: ""
	I1014 15:06:06.071774   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.071785   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:06.071795   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:06.071808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:06.125737   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:06.125774   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:06.139150   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:06.139177   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:06.206731   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:06.206757   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:06.206773   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:06.287183   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:06.287218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:07.565983   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.065897   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:07.809832   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.309290   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:08.827345   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:08.841290   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:08.841384   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:08.877789   72639 cri.go:89] found id: ""
	I1014 15:06:08.877815   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.877824   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:08.877832   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:08.877895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:08.912491   72639 cri.go:89] found id: ""
	I1014 15:06:08.912517   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.912525   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:08.912530   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:08.912586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:08.948727   72639 cri.go:89] found id: ""
	I1014 15:06:08.948755   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.948765   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:08.948773   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:08.948837   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:08.984397   72639 cri.go:89] found id: ""
	I1014 15:06:08.984428   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.984440   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:08.984448   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:08.984498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:09.019222   72639 cri.go:89] found id: ""
	I1014 15:06:09.019250   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.019260   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:09.019268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:09.019329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:09.058309   72639 cri.go:89] found id: ""
	I1014 15:06:09.058335   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.058346   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:09.058353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:09.058415   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:09.096508   72639 cri.go:89] found id: ""
	I1014 15:06:09.096535   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.096544   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:09.096550   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:09.096599   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:09.134564   72639 cri.go:89] found id: ""
	I1014 15:06:09.134611   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.134624   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:09.134635   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:09.134647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:09.188220   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:09.188254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:09.203119   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:09.203149   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:09.279357   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:09.279379   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:09.279390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:09.364219   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:09.364253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:11.910976   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:11.926067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:11.926149   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:11.966238   72639 cri.go:89] found id: ""
	I1014 15:06:11.966271   72639 logs.go:282] 0 containers: []
	W1014 15:06:11.966282   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:11.966289   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:11.966350   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:12.002580   72639 cri.go:89] found id: ""
	I1014 15:06:12.002617   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.002630   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:12.002637   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:12.002698   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:12.037014   72639 cri.go:89] found id: ""
	I1014 15:06:12.037037   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.037046   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:12.037051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:12.037111   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:12.070937   72639 cri.go:89] found id: ""
	I1014 15:06:12.070957   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.070965   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:12.070970   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:12.071019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:12.104920   72639 cri.go:89] found id: ""
	I1014 15:06:12.104949   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.104960   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:12.104967   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:12.105026   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:12.142498   72639 cri.go:89] found id: ""
	I1014 15:06:12.142530   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.142544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:12.142555   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:12.142628   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:12.179590   72639 cri.go:89] found id: ""
	I1014 15:06:12.179613   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.179621   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:12.179627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:12.179675   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:12.213947   72639 cri.go:89] found id: ""
	I1014 15:06:12.213973   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.213981   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:12.213989   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:12.213998   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:12.268214   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:12.268257   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:12.283561   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:12.283594   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:12.382344   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:12.382367   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:12.382377   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:12.469818   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:12.469854   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:12.066154   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.565962   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:12.310167   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.810273   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:15.011529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:15.025355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:15.025423   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:15.060996   72639 cri.go:89] found id: ""
	I1014 15:06:15.061028   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.061040   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:15.061047   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:15.061120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:15.103050   72639 cri.go:89] found id: ""
	I1014 15:06:15.103074   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.103082   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:15.103088   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:15.103140   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:15.140095   72639 cri.go:89] found id: ""
	I1014 15:06:15.140122   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.140132   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:15.140139   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:15.140207   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:15.174612   72639 cri.go:89] found id: ""
	I1014 15:06:15.174642   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.174654   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:15.174669   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:15.174737   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:15.209116   72639 cri.go:89] found id: ""
	I1014 15:06:15.209142   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.209152   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:15.209160   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:15.209221   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:15.242857   72639 cri.go:89] found id: ""
	I1014 15:06:15.242885   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.242896   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:15.242902   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:15.242966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:15.283038   72639 cri.go:89] found id: ""
	I1014 15:06:15.283066   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.283076   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:15.283083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:15.283144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:15.319577   72639 cri.go:89] found id: ""
	I1014 15:06:15.319604   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.319612   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:15.319622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:15.319636   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:15.391485   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:15.391506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:15.391520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:15.470140   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:15.470192   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.513098   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:15.513132   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:15.568275   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:15.568305   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:17.065956   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.566207   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:17.308463   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.309185   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.310841   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:18.085915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:18.113889   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:18.113958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:18.167486   72639 cri.go:89] found id: ""
	I1014 15:06:18.167511   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.167519   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:18.167525   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:18.167568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:18.230244   72639 cri.go:89] found id: ""
	I1014 15:06:18.230273   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.230283   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:18.230291   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:18.230351   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:18.264223   72639 cri.go:89] found id: ""
	I1014 15:06:18.264252   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.264261   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:18.264268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:18.264332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:18.298719   72639 cri.go:89] found id: ""
	I1014 15:06:18.298750   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.298762   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:18.298770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:18.298843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:18.335113   72639 cri.go:89] found id: ""
	I1014 15:06:18.335140   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.335147   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:18.335153   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:18.335212   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:18.373690   72639 cri.go:89] found id: ""
	I1014 15:06:18.373721   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.373736   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:18.373743   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:18.373792   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:18.411138   72639 cri.go:89] found id: ""
	I1014 15:06:18.411171   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.411182   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:18.411190   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:18.411250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:18.451281   72639 cri.go:89] found id: ""
	I1014 15:06:18.451306   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.451314   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:18.451323   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:18.451334   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:18.502141   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:18.502178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.517449   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:18.517476   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:18.586737   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:18.586760   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:18.586776   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:18.670234   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:18.670270   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.210200   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:21.222998   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.223053   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.257132   72639 cri.go:89] found id: ""
	I1014 15:06:21.257160   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.257167   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:21.257174   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.257237   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.290905   72639 cri.go:89] found id: ""
	I1014 15:06:21.290933   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.290945   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:21.290952   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.291007   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.331067   72639 cri.go:89] found id: ""
	I1014 15:06:21.331098   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.331108   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:21.331128   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.331178   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.370042   72639 cri.go:89] found id: ""
	I1014 15:06:21.370069   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.370077   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:21.370083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.370141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:21.414900   72639 cri.go:89] found id: ""
	I1014 15:06:21.414920   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.414932   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:21.414938   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:21.414985   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:21.452914   72639 cri.go:89] found id: ""
	I1014 15:06:21.452941   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.452952   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:21.452960   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:21.453022   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:21.486725   72639 cri.go:89] found id: ""
	I1014 15:06:21.486752   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.486763   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:21.486770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:21.486831   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:21.524012   72639 cri.go:89] found id: ""
	I1014 15:06:21.524034   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.524042   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:21.524049   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:21.524059   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:21.603238   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:21.603279   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.645655   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:21.645689   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:21.701053   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:21.701092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:21.715515   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:21.715542   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:21.781831   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:22.067051   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:24.567173   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.810342   72390 pod_ready.go:82] duration metric: took 4m0.007657098s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:21.810365   72390 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 15:06:21.810382   72390 pod_ready.go:39] duration metric: took 4m7.92113061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:21.810401   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:21.810433   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.810488   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.856565   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:21.856587   72390 cri.go:89] found id: ""
	I1014 15:06:21.856594   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:21.856654   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.861036   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.861091   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.898486   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:21.898517   72390 cri.go:89] found id: ""
	I1014 15:06:21.898528   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:21.898587   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.903145   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.903245   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.941127   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:21.941164   72390 cri.go:89] found id: ""
	I1014 15:06:21.941173   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:21.941232   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.945584   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.945658   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.994370   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:21.994398   72390 cri.go:89] found id: ""
	I1014 15:06:21.994407   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:21.994454   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.998498   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.998547   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:22.037415   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.037443   72390 cri.go:89] found id: ""
	I1014 15:06:22.037453   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:22.037507   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.041882   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:22.041947   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:22.079219   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.079243   72390 cri.go:89] found id: ""
	I1014 15:06:22.079252   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:22.079319   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.083373   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:22.083432   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:22.120795   72390 cri.go:89] found id: ""
	I1014 15:06:22.120818   72390 logs.go:282] 0 containers: []
	W1014 15:06:22.120825   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:22.120832   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:22.120889   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:22.158545   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.158571   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.158577   72390 cri.go:89] found id: ""
	I1014 15:06:22.158586   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:22.158662   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.162500   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.166734   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:22.166759   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.202711   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:22.202736   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:22.279594   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:22.279635   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:22.293836   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:22.293863   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:22.335451   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:22.335478   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:22.374244   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:22.374274   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.422538   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:22.422567   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.486973   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:22.487009   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.528871   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:22.528899   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:22.575947   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:22.575982   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:22.713356   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:22.713387   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:22.760315   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:22.760348   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:22.811144   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:22.811169   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:25.780847   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:25.800698   72390 api_server.go:72] duration metric: took 4m18.640749756s to wait for apiserver process to appear ...
	I1014 15:06:25.800733   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:25.800779   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:25.800845   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:25.841159   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:25.841193   72390 cri.go:89] found id: ""
	I1014 15:06:25.841203   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:25.841259   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.845503   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:25.845560   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:25.884122   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:25.884151   72390 cri.go:89] found id: ""
	I1014 15:06:25.884161   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:25.884223   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.889638   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:25.889700   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:25.931199   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:25.931220   72390 cri.go:89] found id: ""
	I1014 15:06:25.931230   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:25.931285   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.936063   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:25.936127   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:25.979162   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:25.979188   72390 cri.go:89] found id: ""
	I1014 15:06:25.979197   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:25.979254   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.983550   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:25.983611   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:26.021835   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:26.021854   72390 cri.go:89] found id: ""
	I1014 15:06:26.021862   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:26.021911   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.026005   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:26.026073   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:26.067719   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:26.067740   72390 cri.go:89] found id: ""
	I1014 15:06:26.067749   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:26.067803   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.073387   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:26.073453   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:26.116305   72390 cri.go:89] found id: ""
	I1014 15:06:26.116336   72390 logs.go:282] 0 containers: []
	W1014 15:06:26.116349   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:26.116358   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:26.116427   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:26.156959   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.156985   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.156991   72390 cri.go:89] found id: ""
	I1014 15:06:26.156999   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:26.157051   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.161437   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.165696   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:26.165718   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:26.282026   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:26.282056   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:26.333504   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:26.333543   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:26.376435   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:26.376469   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.416633   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:26.416660   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.388546   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.257645941s)
	I1014 15:06:26.388631   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:26.407118   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:26.417718   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:26.428364   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:26.428391   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:26.428451   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:26.437953   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:26.438026   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:26.448356   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:26.458476   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:26.458541   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:26.469941   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.482934   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:26.483016   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.495682   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:26.506113   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:26.506176   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:26.517784   72173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:26.568927   72173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:06:26.568978   72173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:26.685727   72173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:26.685855   72173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:26.685963   72173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:26.693948   72173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:26.696177   72173 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:26.696269   72173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:26.696318   72173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:26.696388   72173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:26.696438   72173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:26.696495   72173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:26.696536   72173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:26.696588   72173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:26.696639   72173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:26.696696   72173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:26.696760   72173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:26.700275   72173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:26.700406   72173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:26.831734   72173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:27.336318   72173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:06:27.574604   72173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:27.681370   72173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:27.788769   72173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:27.789324   72173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:27.791842   72173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:24.282018   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:24.295177   72639 kubeadm.go:597] duration metric: took 4m4.450514459s to restartPrimaryControlPlane
	W1014 15:06:24.295255   72639 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:24.295283   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:27.793786   72173 out.go:235]   - Booting up control plane ...
	I1014 15:06:27.793891   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:27.793980   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:27.794089   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:27.815223   72173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:27.821764   72173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:27.821817   72173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:27.965327   72173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:06:27.965707   72173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:06:28.967332   72173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001260991s
	I1014 15:06:28.967473   72173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:06:29.238014   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.942706631s)
	I1014 15:06:29.238096   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:29.258804   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:29.269440   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:29.279613   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:29.279633   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:29.279672   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:29.292840   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:29.292912   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:29.306987   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:29.319896   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:29.319970   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:29.333974   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.343993   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:29.344051   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.354691   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:29.364354   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:29.364422   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:29.374674   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:29.452845   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:06:29.452961   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:29.618263   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:29.618446   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:29.618582   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:06:29.813387   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:29.815501   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:29.815610   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:29.815697   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:29.815799   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:29.815879   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:29.815971   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:29.816039   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:29.816125   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:29.816206   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:29.816307   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:29.816404   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:29.816454   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:29.816531   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:29.944505   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:30.106467   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:30.226356   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:30.322169   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:30.342382   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:30.343666   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:30.343736   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:30.507000   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:27.066923   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:29.068434   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:26.453659   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:26.453693   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:26.900485   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:26.900518   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:26.925431   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:26.925461   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:26.986104   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:26.986140   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:27.037557   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:27.037600   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:27.084362   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:27.084397   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:27.138680   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:27.138713   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:27.191283   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:27.191314   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:29.761781   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:06:29.769020   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:06:29.770210   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:29.770232   72390 api_server.go:131] duration metric: took 3.969490314s to wait for apiserver health ...
	I1014 15:06:29.770242   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:29.770268   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:29.770328   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:29.827908   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:29.827930   72390 cri.go:89] found id: ""
	I1014 15:06:29.827939   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:29.827994   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.837786   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:29.837864   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:29.877625   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:29.877661   72390 cri.go:89] found id: ""
	I1014 15:06:29.877672   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:29.877738   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.882502   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:29.882578   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:29.923002   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:29.923027   72390 cri.go:89] found id: ""
	I1014 15:06:29.923037   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:29.923094   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.927559   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:29.927621   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:29.966098   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:29.966124   72390 cri.go:89] found id: ""
	I1014 15:06:29.966133   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:29.966189   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.972287   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:29.972371   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:30.024389   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.024414   72390 cri.go:89] found id: ""
	I1014 15:06:30.024423   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:30.024481   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.029914   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:30.029976   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:30.085703   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.085727   72390 cri.go:89] found id: ""
	I1014 15:06:30.085737   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:30.085806   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.097004   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:30.097098   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:30.147464   72390 cri.go:89] found id: ""
	I1014 15:06:30.147494   72390 logs.go:282] 0 containers: []
	W1014 15:06:30.147505   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:30.147512   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:30.147573   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:30.195003   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.195030   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:30.195036   72390 cri.go:89] found id: ""
	I1014 15:06:30.195045   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:30.195099   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.199436   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.204079   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:30.204105   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:30.221021   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:30.221049   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:30.280979   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:30.281013   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:30.339261   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:30.339291   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.390034   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:30.390081   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.461221   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:30.461262   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.504100   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:30.504134   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:30.870561   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:30.870629   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:30.942952   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:30.942998   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:30.995435   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:30.995484   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:31.038804   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:31.038839   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:31.080187   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:31.080218   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:31.122248   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:31.122295   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:30.509157   72639 out.go:235]   - Booting up control plane ...
	I1014 15:06:30.509293   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:30.518440   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:30.520572   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:30.522337   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:30.524996   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:06:33.742510   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:06:33.742539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.742546   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.742552   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.742557   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.742562   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.742566   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.742576   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.742582   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.742615   72390 system_pods.go:74] duration metric: took 3.972347536s to wait for pod list to return data ...
	I1014 15:06:33.742628   72390 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:33.744532   72390 default_sa.go:45] found service account: "default"
	I1014 15:06:33.744551   72390 default_sa.go:55] duration metric: took 1.918153ms for default service account to be created ...
	I1014 15:06:33.744558   72390 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:33.750292   72390 system_pods.go:86] 8 kube-system pods found
	I1014 15:06:33.750315   72390 system_pods.go:89] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.750320   72390 system_pods.go:89] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.750324   72390 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.750329   72390 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.750332   72390 system_pods.go:89] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.750335   72390 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.750341   72390 system_pods.go:89] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.750346   72390 system_pods.go:89] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.750352   72390 system_pods.go:126] duration metric: took 5.790549ms to wait for k8s-apps to be running ...
	I1014 15:06:33.750358   72390 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:33.750398   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:33.770342   72390 system_svc.go:56] duration metric: took 19.978034ms WaitForService to wait for kubelet
	I1014 15:06:33.770370   72390 kubeadm.go:582] duration metric: took 4m26.610427104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:33.770392   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:33.774149   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:33.774176   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:33.774190   72390 node_conditions.go:105] duration metric: took 3.792746ms to run NodePressure ...
	I1014 15:06:33.774203   72390 start.go:241] waiting for startup goroutines ...
	I1014 15:06:33.774217   72390 start.go:246] waiting for cluster config update ...
	I1014 15:06:33.774232   72390 start.go:255] writing updated cluster config ...
	I1014 15:06:33.774560   72390 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:33.823879   72390 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:33.825962   72390 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-201291" cluster and "default" namespace by default
	I1014 15:06:33.976430   72173 kubeadm.go:310] [api-check] The API server is healthy after 5.00773575s
	I1014 15:06:33.990496   72173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:06:34.010821   72173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:06:34.051244   72173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:06:34.051513   72173 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-989166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:06:34.066447   72173 kubeadm.go:310] [bootstrap-token] Using token: 46olqw.t0lfd7bmyz0olhbh
	I1014 15:06:34.067925   72173 out.go:235]   - Configuring RBAC rules ...
	I1014 15:06:34.068073   72173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:06:34.077775   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:06:34.097676   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:06:34.103212   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:06:34.112640   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:06:34.119886   72173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:06:34.382372   72173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:06:34.825514   72173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:06:35.383856   72173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:06:35.383877   72173 kubeadm.go:310] 
	I1014 15:06:35.383939   72173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:06:35.383976   72173 kubeadm.go:310] 
	I1014 15:06:35.384094   72173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:06:35.384103   72173 kubeadm.go:310] 
	I1014 15:06:35.384136   72173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:06:35.384223   72173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:06:35.384286   72173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:06:35.384311   72173 kubeadm.go:310] 
	I1014 15:06:35.384414   72173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:06:35.384430   72173 kubeadm.go:310] 
	I1014 15:06:35.384499   72173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:06:35.384512   72173 kubeadm.go:310] 
	I1014 15:06:35.384597   72173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:06:35.384685   72173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:06:35.384744   72173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:06:35.384750   72173 kubeadm.go:310] 
	I1014 15:06:35.384821   72173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:06:35.384928   72173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:06:35.384940   72173 kubeadm.go:310] 
	I1014 15:06:35.385047   72173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385192   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:06:35.385224   72173 kubeadm.go:310] 	--control-plane 
	I1014 15:06:35.385231   72173 kubeadm.go:310] 
	I1014 15:06:35.385322   72173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:06:35.385334   72173 kubeadm.go:310] 
	I1014 15:06:35.385449   72173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385588   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:06:35.386604   72173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:06:35.386674   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:06:35.386689   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:06:35.388617   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:06:31.069009   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:33.565864   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:35.390017   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:06:35.402242   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:06:35.428958   72173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:06:35.429016   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:35.429080   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-989166 minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=embed-certs-989166 minikube.k8s.io/primary=true
	I1014 15:06:35.475775   72173 ops.go:34] apiserver oom_adj: -16
	I1014 15:06:35.645234   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.145613   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.646197   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.145401   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.645956   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.145978   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.645292   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.145444   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.646019   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.869659   72173 kubeadm.go:1113] duration metric: took 4.440701402s to wait for elevateKubeSystemPrivileges
	I1014 15:06:39.869695   72173 kubeadm.go:394] duration metric: took 5m1.76989803s to StartCluster
	I1014 15:06:39.869713   72173 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.869797   72173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:06:39.872564   72173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.872947   72173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:06:39.873165   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:06:39.873085   72173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:06:39.873246   72173 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-989166"
	I1014 15:06:39.873256   72173 addons.go:69] Setting metrics-server=true in profile "embed-certs-989166"
	I1014 15:06:39.873273   72173 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-989166"
	I1014 15:06:39.873272   72173 addons.go:69] Setting default-storageclass=true in profile "embed-certs-989166"
	I1014 15:06:39.873319   72173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-989166"
	W1014 15:06:39.873282   72173 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:06:39.873417   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873282   72173 addons.go:234] Setting addon metrics-server=true in "embed-certs-989166"
	W1014 15:06:39.873476   72173 addons.go:243] addon metrics-server should already be in state true
	I1014 15:06:39.873504   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873875   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873888   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873920   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873947   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873986   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.874050   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.874921   72173 out.go:177] * Verifying Kubernetes components...
	I1014 15:06:39.876972   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1014 15:06:39.893367   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I1014 15:06:39.893905   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.893915   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894023   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894471   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894493   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894651   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894677   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894713   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894731   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894942   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895073   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895563   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.895593   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.895778   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895970   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.896249   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.896293   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.899661   72173 addons.go:234] Setting addon default-storageclass=true in "embed-certs-989166"
	W1014 15:06:39.899685   72173 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:06:39.899714   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.900088   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.900131   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.912591   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1014 15:06:39.913089   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.913630   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.913652   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.914099   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.914287   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.914839   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1014 15:06:39.915288   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.915783   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.915802   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.916147   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.916171   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.916382   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.917766   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.917796   72173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:06:39.919192   72173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:06:35.567508   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:38.065792   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:40.066618   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:39.919297   72173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:39.919320   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:06:39.919339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.920468   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:06:39.920489   72173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:06:39.920507   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.921603   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1014 15:06:39.921970   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.922502   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.922525   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.922994   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.923333   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923585   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.923627   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.923826   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.923846   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923876   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924028   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.924270   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.924291   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.924310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924397   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.924674   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924840   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.925027   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.925201   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.945435   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1014 15:06:39.945958   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.946468   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.946497   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.946855   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.947023   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.948734   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.948924   72173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:39.948942   72173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:06:39.948966   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.951019   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.951437   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951570   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.951742   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.951918   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.952058   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:40.129893   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:06:40.215427   72173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224710   72173 node_ready.go:49] node "embed-certs-989166" has status "Ready":"True"
	I1014 15:06:40.224731   72173 node_ready.go:38] duration metric: took 9.266994ms for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224742   72173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:40.230651   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:40.394829   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:40.422573   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:40.430300   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:06:40.430319   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:06:40.503826   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:06:40.503857   72173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:06:40.586087   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.586116   72173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:06:40.726605   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.887453   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887475   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.887809   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.887857   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.887869   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.887886   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887898   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.888127   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.888150   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.888160   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.901694   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.901717   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.902091   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.902103   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.902111   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.352636   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.352670   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.352963   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:41.353017   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353029   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.353036   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.353043   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.353274   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353302   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578200   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578219   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578484   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578529   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578554   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578588   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578827   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578844   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578854   72173 addons.go:475] Verifying addon metrics-server=true in "embed-certs-989166"
	I1014 15:06:41.581312   72173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:06:41.582506   72173 addons.go:510] duration metric: took 1.709432803s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:06:42.237265   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.240605   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:42.067701   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.566134   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:46.738094   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:48.739238   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.238145   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.238167   72173 pod_ready.go:82] duration metric: took 9.007493385s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.238176   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243268   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.243299   72173 pod_ready.go:82] duration metric: took 5.116183ms for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243311   72173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.247979   72173 pod_ready.go:93] pod "etcd-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.248001   72173 pod_ready.go:82] duration metric: took 4.682826ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.248009   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252590   72173 pod_ready.go:93] pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.252615   72173 pod_ready.go:82] duration metric: took 4.599399ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252624   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257541   72173 pod_ready.go:93] pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.257566   72173 pod_ready.go:82] duration metric: took 4.935116ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257575   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:47.064934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.066284   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.635873   72173 pod_ready.go:93] pod "kube-proxy-g572s" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.635895   72173 pod_ready.go:82] duration metric: took 378.313947ms for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.635904   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035141   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:50.035169   72173 pod_ready.go:82] duration metric: took 399.257073ms for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035179   72173 pod_ready.go:39] duration metric: took 9.810424567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:50.035195   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:50.035258   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:50.054964   72173 api_server.go:72] duration metric: took 10.181978114s to wait for apiserver process to appear ...
	I1014 15:06:50.054996   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:50.055020   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:06:50.061606   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:06:50.063380   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:50.063411   72173 api_server.go:131] duration metric: took 8.40661ms to wait for apiserver health ...
	I1014 15:06:50.063421   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:50.239258   72173 system_pods.go:59] 9 kube-system pods found
	I1014 15:06:50.239286   72173 system_pods.go:61] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.239292   72173 system_pods.go:61] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.239295   72173 system_pods.go:61] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.239299   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.239303   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.239305   72173 system_pods.go:61] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.239308   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.239314   72173 system_pods.go:61] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.239317   72173 system_pods.go:61] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.239325   72173 system_pods.go:74] duration metric: took 175.89649ms to wait for pod list to return data ...
	I1014 15:06:50.239334   72173 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:50.435980   72173 default_sa.go:45] found service account: "default"
	I1014 15:06:50.436007   72173 default_sa.go:55] duration metric: took 196.667838ms for default service account to be created ...
	I1014 15:06:50.436017   72173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:50.639185   72173 system_pods.go:86] 9 kube-system pods found
	I1014 15:06:50.639224   72173 system_pods.go:89] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.639234   72173 system_pods.go:89] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.639241   72173 system_pods.go:89] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.639248   72173 system_pods.go:89] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.639254   72173 system_pods.go:89] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.639262   72173 system_pods.go:89] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.639269   72173 system_pods.go:89] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.639283   72173 system_pods.go:89] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.639295   72173 system_pods.go:89] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.639309   72173 system_pods.go:126] duration metric: took 203.286322ms to wait for k8s-apps to be running ...
	I1014 15:06:50.639327   72173 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:50.639388   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:50.655377   72173 system_svc.go:56] duration metric: took 16.0447ms WaitForService to wait for kubelet
	I1014 15:06:50.655402   72173 kubeadm.go:582] duration metric: took 10.782421893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:50.655425   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:50.835507   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:50.835543   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:50.835556   72173 node_conditions.go:105] duration metric: took 180.126755ms to run NodePressure ...
	I1014 15:06:50.835570   72173 start.go:241] waiting for startup goroutines ...
	I1014 15:06:50.835580   72173 start.go:246] waiting for cluster config update ...
	I1014 15:06:50.835594   72173 start.go:255] writing updated cluster config ...
	I1014 15:06:50.835924   72173 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:50.883737   72173 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:50.886200   72173 out.go:177] * Done! kubectl is now configured to use "embed-certs-989166" cluster and "default" namespace by default
	I1014 15:06:51.066344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:53.566466   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:56.066734   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:58.567007   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:01.066112   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:03.068758   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:05.566174   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:07.566274   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:09.566829   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:10.525694   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:07:10.526665   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:10.526908   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:12.066402   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:13.560638   71679 pod_ready.go:82] duration metric: took 4m0.000980901s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	E1014 15:07:13.560669   71679 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:07:13.560693   71679 pod_ready.go:39] duration metric: took 4m13.04495779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:13.560725   71679 kubeadm.go:597] duration metric: took 4m21.006404411s to restartPrimaryControlPlane
	W1014 15:07:13.560791   71679 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:07:13.560823   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:07:15.527128   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:15.527376   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:25.527779   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:25.528060   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:39.775370   71679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.214519412s)
	I1014 15:07:39.775448   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:07:39.790736   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:07:39.800575   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:07:39.810380   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:07:39.810402   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:07:39.810462   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:07:39.819880   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:07:39.819938   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:07:39.830542   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:07:39.840268   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:07:39.840318   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:07:39.849727   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.858513   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:07:39.858651   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.869154   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:07:39.878724   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:07:39.878798   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:07:39.888123   71679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:07:39.942676   71679 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:07:39.942771   71679 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:07:40.060558   71679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:07:40.060698   71679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:07:40.060861   71679 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:07:40.076085   71679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:07:40.078200   71679 out.go:235]   - Generating certificates and keys ...
	I1014 15:07:40.078301   71679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:07:40.078381   71679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:07:40.078505   71679 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:07:40.078620   71679 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:07:40.078717   71679 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:07:40.078794   71679 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:07:40.078887   71679 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:07:40.078973   71679 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:07:40.079069   71679 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:07:40.079161   71679 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:07:40.079234   71679 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:07:40.079315   71679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:07:40.177082   71679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:07:40.264965   71679 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:07:40.415660   71679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:07:40.556759   71679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:07:40.727152   71679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:07:40.727573   71679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:07:40.730409   71679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:07:40.732204   71679 out.go:235]   - Booting up control plane ...
	I1014 15:07:40.732328   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:07:40.732440   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:07:40.732529   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:07:40.751839   71679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:07:40.758034   71679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:07:40.758095   71679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:07:40.895135   71679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:07:40.895254   71679 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:07:41.397066   71679 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.194797ms
	I1014 15:07:41.397209   71679 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:07:46.401247   71679 kubeadm.go:310] [api-check] The API server is healthy after 5.002197966s
	I1014 15:07:46.419134   71679 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:07:46.433128   71679 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:07:46.477079   71679 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:07:46.477289   71679 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:07:46.492703   71679 kubeadm.go:310] [bootstrap-token] Using token: 1vsv04.mf3pqj2ow157sq8h
	I1014 15:07:46.494314   71679 out.go:235]   - Configuring RBAC rules ...
	I1014 15:07:46.494467   71679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:07:46.501090   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:07:46.515987   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:07:46.522417   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:07:46.528612   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:07:46.536975   71679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:07:46.810642   71679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:07:47.240531   71679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:07:47.810279   71679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:07:47.811169   71679 kubeadm.go:310] 
	I1014 15:07:47.811230   71679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:07:47.811238   71679 kubeadm.go:310] 
	I1014 15:07:47.811307   71679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:07:47.811312   71679 kubeadm.go:310] 
	I1014 15:07:47.811335   71679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:07:47.811388   71679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:07:47.811440   71679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:07:47.811447   71679 kubeadm.go:310] 
	I1014 15:07:47.811501   71679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:07:47.811507   71679 kubeadm.go:310] 
	I1014 15:07:47.811546   71679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:07:47.811553   71679 kubeadm.go:310] 
	I1014 15:07:47.811600   71679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:07:47.811667   71679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:07:47.811755   71679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:07:47.811771   71679 kubeadm.go:310] 
	I1014 15:07:47.811844   71679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:07:47.811912   71679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:07:47.811921   71679 kubeadm.go:310] 
	I1014 15:07:47.811999   71679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812091   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:07:47.812139   71679 kubeadm.go:310] 	--control-plane 
	I1014 15:07:47.812153   71679 kubeadm.go:310] 
	I1014 15:07:47.812231   71679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:07:47.812238   71679 kubeadm.go:310] 
	I1014 15:07:47.812306   71679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812393   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:07:47.814071   71679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:07:47.814103   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:07:47.814113   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:07:47.816033   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:07:45.528527   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:45.528768   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:47.817325   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:07:47.829639   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:07:47.847797   71679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:07:47.847857   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:47.847929   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-813300 minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=no-preload-813300 minikube.k8s.io/primary=true
	I1014 15:07:48.039959   71679 ops.go:34] apiserver oom_adj: -16
	I1014 15:07:48.040095   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:48.540295   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.040911   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.540233   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.040146   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.540494   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.041033   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.540516   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.040935   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.146854   71679 kubeadm.go:1113] duration metric: took 4.299055033s to wait for elevateKubeSystemPrivileges
	I1014 15:07:52.146890   71679 kubeadm.go:394] duration metric: took 4m59.642546726s to StartCluster
	I1014 15:07:52.146906   71679 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.146987   71679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:07:52.148782   71679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.149067   71679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:07:52.149168   71679 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:07:52.149303   71679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-813300"
	I1014 15:07:52.149333   71679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-813300"
	I1014 15:07:52.149342   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1014 15:07:52.149355   71679 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:07:52.149378   71679 addons.go:69] Setting default-storageclass=true in profile "no-preload-813300"
	I1014 15:07:52.149390   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149412   71679 addons.go:69] Setting metrics-server=true in profile "no-preload-813300"
	I1014 15:07:52.149447   71679 addons.go:234] Setting addon metrics-server=true in "no-preload-813300"
	W1014 15:07:52.149461   71679 addons.go:243] addon metrics-server should already be in state true
	I1014 15:07:52.149494   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149421   71679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-813300"
	I1014 15:07:52.149748   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149789   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149861   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149890   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149905   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149928   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.150482   71679 out.go:177] * Verifying Kubernetes components...
	I1014 15:07:52.152252   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:07:52.167205   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1014 15:07:52.170723   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I1014 15:07:52.170742   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.170728   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1014 15:07:52.171111   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171321   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171386   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171678   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171702   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171717   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.171900   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171916   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.172164   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172243   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172279   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172325   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.172386   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.172868   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172916   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.175482   71679 addons.go:234] Setting addon default-storageclass=true in "no-preload-813300"
	W1014 15:07:52.175502   71679 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:07:52.175529   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.175763   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.175792   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.190835   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1014 15:07:52.191422   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.191767   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I1014 15:07:52.191901   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1014 15:07:52.192010   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.192027   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192317   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192436   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.192481   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192988   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193010   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192992   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193060   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.193474   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193524   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.193530   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193563   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.193729   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.193770   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.195702   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.195770   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.197642   71679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:07:52.197652   71679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:07:52.198957   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:07:52.198978   71679 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:07:52.198998   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.199075   71679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.199096   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:07:52.199111   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.202637   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203064   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203088   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203245   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.203515   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.203519   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203663   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.203812   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.203878   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.204187   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.204377   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.204535   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.204683   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.231332   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I1014 15:07:52.231813   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.232320   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.232344   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.232645   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.232836   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.234309   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.234570   71679 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.234585   71679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:07:52.234622   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.237749   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238364   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.238393   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238562   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.238744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.238903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.239031   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.375830   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:07:52.401606   71679 node_ready.go:35] waiting up to 6m0s for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431363   71679 node_ready.go:49] node "no-preload-813300" has status "Ready":"True"
	I1014 15:07:52.431393   71679 node_ready.go:38] duration metric: took 29.758277ms for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431405   71679 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:52.446747   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:52.501642   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:07:52.501664   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:07:52.509733   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.515833   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.536485   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:07:52.536508   71679 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:07:52.622269   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.622299   71679 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:07:52.702873   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.909827   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.909865   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910194   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910209   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.910235   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.910249   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910510   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910525   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918161   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.918182   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.918473   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.918493   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918480   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:53.707659   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.191781585s)
	I1014 15:07:53.707706   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.707719   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708011   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708035   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:53.708052   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.708062   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708330   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708346   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.060665   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.357747934s)
	I1014 15:07:54.060752   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.060770   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.061069   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.061153   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.061164   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.061173   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.061184   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.062712   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.062787   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.062797   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.062811   71679 addons.go:475] Verifying addon metrics-server=true in "no-preload-813300"
	I1014 15:07:54.064762   71679 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:07:54.066623   71679 addons.go:510] duration metric: took 1.917465271s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:07:54.454216   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:56.455649   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:56.455674   71679 pod_ready.go:82] duration metric: took 4.00889709s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:56.455689   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:58.461687   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:59.962360   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.962382   71679 pod_ready.go:82] duration metric: took 3.506686516s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.962391   71679 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969241   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.969261   71679 pod_ready.go:82] duration metric: took 6.864356ms for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969270   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974810   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.974828   71679 pod_ready.go:82] duration metric: took 5.552122ms for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974837   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979555   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.979580   71679 pod_ready.go:82] duration metric: took 4.735265ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979592   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985111   71679 pod_ready.go:93] pod "kube-proxy-54rrd" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.985138   71679 pod_ready.go:82] duration metric: took 5.538126ms for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985150   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359524   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:08:00.359548   71679 pod_ready.go:82] duration metric: took 374.389838ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359558   71679 pod_ready.go:39] duration metric: took 7.928141116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:08:00.359575   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:08:00.359626   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:08:00.376115   71679 api_server.go:72] duration metric: took 8.22700683s to wait for apiserver process to appear ...
	I1014 15:08:00.376144   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:08:00.376169   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:08:00.381225   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:08:00.382348   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:08:00.382377   71679 api_server.go:131] duration metric: took 6.225832ms to wait for apiserver health ...
	I1014 15:08:00.382386   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:08:00.563350   71679 system_pods.go:59] 9 kube-system pods found
	I1014 15:08:00.563382   71679 system_pods.go:61] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.563386   71679 system_pods.go:61] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.563390   71679 system_pods.go:61] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.563394   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.563399   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.563402   71679 system_pods.go:61] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.563405   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.563412   71679 system_pods.go:61] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.563416   71679 system_pods.go:61] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.563424   71679 system_pods.go:74] duration metric: took 181.032852ms to wait for pod list to return data ...
	I1014 15:08:00.563436   71679 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:08:00.760054   71679 default_sa.go:45] found service account: "default"
	I1014 15:08:00.760084   71679 default_sa.go:55] duration metric: took 196.637678ms for default service account to be created ...
	I1014 15:08:00.760095   71679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:08:00.962545   71679 system_pods.go:86] 9 kube-system pods found
	I1014 15:08:00.962577   71679 system_pods.go:89] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.962583   71679 system_pods.go:89] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.962587   71679 system_pods.go:89] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.962591   71679 system_pods.go:89] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.962605   71679 system_pods.go:89] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.962609   71679 system_pods.go:89] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.962613   71679 system_pods.go:89] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.962619   71679 system_pods.go:89] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.962623   71679 system_pods.go:89] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.962633   71679 system_pods.go:126] duration metric: took 202.532202ms to wait for k8s-apps to be running ...
	I1014 15:08:00.962640   71679 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:08:00.962682   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:00.980272   71679 system_svc.go:56] duration metric: took 17.624381ms WaitForService to wait for kubelet
	I1014 15:08:00.980310   71679 kubeadm.go:582] duration metric: took 8.831207019s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:08:00.980333   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:08:01.160914   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:08:01.160947   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:08:01.160961   71679 node_conditions.go:105] duration metric: took 180.622279ms to run NodePressure ...
	I1014 15:08:01.160976   71679 start.go:241] waiting for startup goroutines ...
	I1014 15:08:01.160985   71679 start.go:246] waiting for cluster config update ...
	I1014 15:08:01.161000   71679 start.go:255] writing updated cluster config ...
	I1014 15:08:01.161357   71679 ssh_runner.go:195] Run: rm -f paused
	I1014 15:08:01.212486   71679 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:08:01.215083   71679 out.go:177] * Done! kubectl is now configured to use "no-preload-813300" cluster and "default" namespace by default
	I1014 15:08:25.530669   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:08:25.530970   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530998   72639 kubeadm.go:310] 
	I1014 15:08:25.531059   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:08:25.531114   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:08:25.531125   72639 kubeadm.go:310] 
	I1014 15:08:25.531177   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:08:25.531238   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:08:25.531381   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:08:25.531392   72639 kubeadm.go:310] 
	I1014 15:08:25.531527   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:08:25.531587   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:08:25.531633   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:08:25.531643   72639 kubeadm.go:310] 
	I1014 15:08:25.531766   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:08:25.531872   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:08:25.531891   72639 kubeadm.go:310] 
	I1014 15:08:25.532038   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:08:25.532174   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:08:25.532281   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:08:25.532377   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:08:25.532418   72639 kubeadm.go:310] 
	I1014 15:08:25.532543   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:08:25.532640   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:08:25.532742   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 15:08:25.532833   72639 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 15:08:25.532870   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:08:31.003635   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.470741012s)
	I1014 15:08:31.003724   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:31.018666   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:08:31.029707   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:08:31.029729   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:08:31.029776   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:08:31.039554   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:08:31.039625   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:08:31.049748   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:08:31.059618   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:08:31.059682   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:08:31.069369   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.078321   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:08:31.078385   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.088006   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:08:31.096681   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:08:31.096742   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:08:31.106269   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:08:31.182768   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:08:31.182833   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:08:31.341660   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:08:31.341833   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:08:31.342008   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:08:31.538731   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:08:31.540933   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:08:31.541037   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:08:31.541124   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:08:31.541270   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:08:31.541386   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:08:31.541486   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:08:31.541559   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:08:31.541663   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:08:31.541750   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:08:31.542000   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:08:31.542534   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:08:31.542627   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:08:31.542711   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:08:31.847005   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:08:32.049586   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:08:32.355652   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:08:32.511031   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:08:32.526310   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:08:32.526755   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:08:32.526841   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:08:32.665898   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:08:32.667688   72639 out.go:235]   - Booting up control plane ...
	I1014 15:08:32.667806   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:08:32.681232   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:08:32.682929   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:08:32.683704   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:08:32.685936   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:09:12.687998   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:09:12.688248   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:12.688517   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:17.689026   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:17.689213   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:27.689821   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:27.690119   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:47.690936   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:47.691185   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691438   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:10:27.691721   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691744   72639 kubeadm.go:310] 
	I1014 15:10:27.691779   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:10:27.691847   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:10:27.691867   72639 kubeadm.go:310] 
	I1014 15:10:27.691907   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:10:27.691972   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:10:27.692124   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:10:27.692136   72639 kubeadm.go:310] 
	I1014 15:10:27.692253   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:10:27.692311   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:10:27.692352   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:10:27.692363   72639 kubeadm.go:310] 
	I1014 15:10:27.692497   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:10:27.692617   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:10:27.692633   72639 kubeadm.go:310] 
	I1014 15:10:27.692787   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:10:27.692915   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:10:27.693051   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:10:27.693146   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:10:27.693158   72639 kubeadm.go:310] 
	I1014 15:10:27.693497   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:10:27.693627   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:10:27.693710   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 15:10:27.693770   72639 kubeadm.go:394] duration metric: took 8m7.905137486s to StartCluster
	I1014 15:10:27.693808   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:10:27.693863   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:10:27.735373   72639 cri.go:89] found id: ""
	I1014 15:10:27.735410   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.735419   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:10:27.735425   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:10:27.735484   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:10:27.775691   72639 cri.go:89] found id: ""
	I1014 15:10:27.775713   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.775721   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:10:27.775727   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:10:27.775778   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:10:27.811621   72639 cri.go:89] found id: ""
	I1014 15:10:27.811645   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.811653   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:10:27.811658   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:10:27.811718   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:10:27.850894   72639 cri.go:89] found id: ""
	I1014 15:10:27.850917   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.850925   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:10:27.850931   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:10:27.850979   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:10:27.891559   72639 cri.go:89] found id: ""
	I1014 15:10:27.891596   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.891608   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:10:27.891616   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:10:27.891671   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:10:27.929896   72639 cri.go:89] found id: ""
	I1014 15:10:27.929929   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.929942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:10:27.930002   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:10:27.930096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:10:27.964801   72639 cri.go:89] found id: ""
	I1014 15:10:27.964828   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.964839   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:10:27.964845   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:10:27.964905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:10:28.011737   72639 cri.go:89] found id: ""
	I1014 15:10:28.011761   72639 logs.go:282] 0 containers: []
	W1014 15:10:28.011769   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:10:28.011777   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:10:28.011788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:10:28.088053   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:10:28.088082   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:10:28.088098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:10:28.214495   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:10:28.214531   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:10:28.254766   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:10:28.254796   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:10:28.304942   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:10:28.304977   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1014 15:10:28.319674   72639 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 15:10:28.319729   72639 out.go:270] * 
	W1014 15:10:28.319783   72639 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.319802   72639 out.go:270] * 
	W1014 15:10:28.320716   72639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 15:10:28.324551   72639 out.go:201] 
	W1014 15:10:28.325905   72639 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.325940   72639 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 15:10:28.325985   72639 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 15:10:28.327473   72639 out.go:201] 
	
	
	==> CRI-O <==
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.568416672Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919173568389732,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=297529a3-3186-4981-92a6-b2e56d8db9b2 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.569022771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=82d6dbe9-0ca9-48f6-933b-a4fcb39568e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.569088628Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=82d6dbe9-0ca9-48f6-933b-a4fcb39568e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.569133265Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=82d6dbe9-0ca9-48f6-933b-a4fcb39568e1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.601807250Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d152f81f-5eee-4d68-90e3-b0b19a5695db name=/runtime.v1.RuntimeService/Version
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.601899880Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d152f81f-5eee-4d68-90e3-b0b19a5695db name=/runtime.v1.RuntimeService/Version
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.603270763Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00c05d11-d663-4962-b9fd-13a304b767cd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.603734550Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919173603704152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00c05d11-d663-4962-b9fd-13a304b767cd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.604225295Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03435c3b-5a65-432e-942b-23ef991fbfec name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.604277918Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03435c3b-5a65-432e-942b-23ef991fbfec name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.604357233Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=03435c3b-5a65-432e-942b-23ef991fbfec name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.637147714Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=599ceb81-41bc-4ba3-9c08-9eb058470a57 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.637242484Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=599ceb81-41bc-4ba3-9c08-9eb058470a57 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.638728243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37b07946-27fb-4b24-b1bc-c374f40366a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.639127676Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919173639101514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37b07946-27fb-4b24-b1bc-c374f40366a0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.639721434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92ec91cc-8942-458a-a592-07218ecd63b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.639793394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92ec91cc-8942-458a-a592-07218ecd63b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.639826141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=92ec91cc-8942-458a-a592-07218ecd63b7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.672206087Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d5e147b4-b52f-4c6a-bd45-b4f087870d91 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.672279819Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d5e147b4-b52f-4c6a-bd45-b4f087870d91 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.673616568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1007d86-11fb-4201-8f2c-27f61ddb2c35 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.673974322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919173673955036,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1007d86-11fb-4201-8f2c-27f61ddb2c35 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.674573574Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cac583e3-e532-4cd9-b56b-d9d5ee336dd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.674620774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cac583e3-e532-4cd9-b56b-d9d5ee336dd4 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:19:33 old-k8s-version-399767 crio[635]: time="2024-10-14 15:19:33.674649586Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cac583e3-e532-4cd9-b56b-d9d5ee336dd4 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct14 15:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052051] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050116] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct14 15:02] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.605075] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.701901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.221397] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.058897] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064336] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.225460] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.166157] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.271984] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.642881] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.070885] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.471808] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[ +13.079512] kauditd_printk_skb: 46 callbacks suppressed
	[Oct14 15:06] systemd-fstab-generator[5074]: Ignoring "noauto" option for root device
	[Oct14 15:08] systemd-fstab-generator[5361]: Ignoring "noauto" option for root device
	[  +0.073672] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:19:33 up 17 min,  0 users,  load average: 0.01, 0.03, 0.00
	Linux old-k8s-version-399767 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0001916f0)
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000955ef0, 0x4f0ac20, 0xc000b86af0, 0x1, 0xc0001000c0)
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e0540, 0xc0001000c0)
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0003613f0, 0xc000b7edc0)
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6548]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Oct 14 15:19:28 old-k8s-version-399767 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 14 15:19:28 old-k8s-version-399767 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Oct 14 15:19:28 old-k8s-version-399767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Oct 14 15:19:28 old-k8s-version-399767 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Oct 14 15:19:28 old-k8s-version-399767 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6556]: I1014 15:19:28.907270    6556 server.go:416] Version: v1.20.0
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6556]: I1014 15:19:28.907595    6556 server.go:837] Client rotation is on, will bootstrap in background
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6556]: I1014 15:19:28.909683    6556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6556]: W1014 15:19:28.910625    6556 manager.go:159] Cannot detect current cgroup on cgroup v2
	Oct 14 15:19:28 old-k8s-version-399767 kubelet[6556]: I1014 15:19:28.911347    6556 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (248.814767ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-399767" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.41s)
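The kubeadm output above fails repeatedly on the kubelet health endpoint (localhost:10248), and the kubelet journal shows the service crash-looping (restart counter at 114) with a "Cannot detect current cgroup on cgroup v2" warning. Below is a minimal shell sketch of the triage steps that kubeadm and minikube themselves suggest in this log; the crio socket path, profile name, and --extra-config flag are quoted from the output above, "minikube" stands for the binary under test (out/minikube-linux-amd64 in this run), and whether the cgroup-driver override actually resolves this particular failure is an assumption, not something verified here.

	# From the host, open a shell inside the node first (this profile runs as a kvm2 VM)
	minikube ssh -p old-k8s-version-399767

	# Inside the node: check the kubelet unit and its journal, as kubeadm suggests above
	systemctl status kubelet
	journalctl -xeu kubelet

	# List any Kubernetes containers CRI-O started, then inspect a failing one by its ID
	# (root is typically required to talk to the crio socket)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# Back on the host, minikube's own suggestion: retry the start with the kubelet
	# cgroup driver pinned to systemd (assumption: this targets the cgroup v2 warning)
	minikube start -p old-k8s-version-399767 --extra-config=kubelet.cgroup-driver=systemd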

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (478.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-14 15:23:35.325709356 +0000 UTC m=+6299.607057685
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-201291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.324µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-201291 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
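The assertion above waits for pods labelled k8s-app=kubernetes-dashboard and then expects the dashboard-metrics-scraper deployment to carry the overridden registry.k8s.io/echoserver:1.4 image. A rough manual equivalent of that check against the same profile is sketched below; the label, namespace, deployment name, and expected image are taken from the test output, while the exact helper logic lives in start_stop_delete_test.go.

	# Wait for the dashboard pods the test looks for (label, namespace, and 9m budget from the output above)
	kubectl --context default-k8s-diff-port-201291 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

	# Confirm the metrics-scraper deployment picked up the custom image override
	kubectl --context default-k8s-diff-port-201291 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper | grep 'registry.k8s.io/echoserver:1.4'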
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-201291 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-201291 logs -n 25: (1.201769495s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 15:21 UTC | 14 Oct 24 15:21 UTC |
	| start   | -p newest-cni-870289 --memory=2200 --alsologtostderr   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:21 UTC | 14 Oct 24 15:22 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	| addons  | enable metrics-server -p newest-cni-870289             | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-870289                                   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-870289                  | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-870289 --memory=2200 --alsologtostderr   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:23 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 15:23 UTC | 14 Oct 24 15:23 UTC |
	| image   | newest-cni-870289 image list                           | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:23 UTC | 14 Oct 24 15:23 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-870289                                   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:23 UTC | 14 Oct 24 15:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-870289                                   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:23 UTC | 14 Oct 24 15:23 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-870289                                   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:23 UTC | 14 Oct 24 15:23 UTC |
	| delete  | -p newest-cni-870289                                   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:23 UTC | 14 Oct 24 15:23 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 15:22:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 15:22:50.242436   80033 out.go:345] Setting OutFile to fd 1 ...
	I1014 15:22:50.242564   80033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 15:22:50.242574   80033 out.go:358] Setting ErrFile to fd 2...
	I1014 15:22:50.242580   80033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 15:22:50.242878   80033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 15:22:50.243414   80033 out.go:352] Setting JSON to false
	I1014 15:22:50.244264   80033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7520,"bootTime":1728911850,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 15:22:50.244362   80033 start.go:139] virtualization: kvm guest
	I1014 15:22:50.246493   80033 out.go:177] * [newest-cni-870289] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 15:22:50.248081   80033 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 15:22:50.248114   80033 notify.go:220] Checking for updates...
	I1014 15:22:50.250426   80033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 15:22:50.251684   80033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:22:50.252816   80033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 15:22:50.253913   80033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 15:22:50.255087   80033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 15:22:50.256910   80033 config.go:182] Loaded profile config "newest-cni-870289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:22:50.257349   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:22:50.257401   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:22:50.273158   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I1014 15:22:50.273668   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:22:50.274247   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:22:50.274267   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:22:50.274732   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:22:50.274949   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:50.275246   80033 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 15:22:50.275664   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:22:50.275741   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:22:50.289988   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I1014 15:22:50.290297   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:22:50.290752   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:22:50.290775   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:22:50.291064   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:22:50.291255   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:50.325994   80033 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 15:22:50.327400   80033 start.go:297] selected driver: kvm2
	I1014 15:22:50.327414   80033 start.go:901] validating driver "kvm2" against &{Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] Sta
rtHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:22:50.327507   80033 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 15:22:50.328209   80033 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 15:22:50.328312   80033 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 15:22:50.343812   80033 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 15:22:50.344268   80033 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 15:22:50.344309   80033 cni.go:84] Creating CNI manager for ""
	I1014 15:22:50.344374   80033 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:22:50.344435   80033 start.go:340] cluster config:
	{Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:
Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:22:50.344552   80033 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 15:22:50.346535   80033 out.go:177] * Starting "newest-cni-870289" primary control-plane node in "newest-cni-870289" cluster
	I1014 15:22:50.347981   80033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:22:50.348025   80033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 15:22:50.348036   80033 cache.go:56] Caching tarball of preloaded images
	I1014 15:22:50.348131   80033 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 15:22:50.348144   80033 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 15:22:50.348252   80033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/config.json ...
	I1014 15:22:50.348479   80033 start.go:360] acquireMachinesLock for newest-cni-870289: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:22:50.348534   80033 start.go:364] duration metric: took 34.27µs to acquireMachinesLock for "newest-cni-870289"
	I1014 15:22:50.348554   80033 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:22:50.348563   80033 fix.go:54] fixHost starting: 
	I1014 15:22:50.348833   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:22:50.348886   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:22:50.363200   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32819
	I1014 15:22:50.363690   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:22:50.364189   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:22:50.364208   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:22:50.364519   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:22:50.364711   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:50.364849   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:22:50.366406   80033 fix.go:112] recreateIfNeeded on newest-cni-870289: state=Stopped err=<nil>
	I1014 15:22:50.366431   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	W1014 15:22:50.366576   80033 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:22:50.368610   80033 out.go:177] * Restarting existing kvm2 VM for "newest-cni-870289" ...
	I1014 15:22:50.369973   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Start
	I1014 15:22:50.370176   80033 main.go:141] libmachine: (newest-cni-870289) Ensuring networks are active...
	I1014 15:22:50.371043   80033 main.go:141] libmachine: (newest-cni-870289) Ensuring network default is active
	I1014 15:22:50.371365   80033 main.go:141] libmachine: (newest-cni-870289) Ensuring network mk-newest-cni-870289 is active
	I1014 15:22:50.371756   80033 main.go:141] libmachine: (newest-cni-870289) Getting domain xml...
	I1014 15:22:50.372450   80033 main.go:141] libmachine: (newest-cni-870289) Creating domain...
	I1014 15:22:51.608832   80033 main.go:141] libmachine: (newest-cni-870289) Waiting to get IP...
	I1014 15:22:51.609871   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:51.610285   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:51.610374   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:51.610290   80068 retry.go:31] will retry after 225.531686ms: waiting for machine to come up
	I1014 15:22:51.837899   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:51.838389   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:51.838413   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:51.838337   80068 retry.go:31] will retry after 320.099873ms: waiting for machine to come up
	I1014 15:22:52.159722   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:52.160196   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:52.160214   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:52.160162   80068 retry.go:31] will retry after 366.320676ms: waiting for machine to come up
	I1014 15:22:52.527657   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:52.528083   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:52.528130   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:52.528054   80068 retry.go:31] will retry after 506.276838ms: waiting for machine to come up
	I1014 15:22:53.035693   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:53.036224   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:53.036247   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:53.036163   80068 retry.go:31] will retry after 601.197956ms: waiting for machine to come up
	I1014 15:22:53.638867   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:53.639380   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:53.639405   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:53.639336   80068 retry.go:31] will retry after 806.198335ms: waiting for machine to come up
	I1014 15:22:54.446655   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:54.447106   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:54.447136   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:54.447054   80068 retry.go:31] will retry after 774.90593ms: waiting for machine to come up
	I1014 15:22:55.224109   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:55.224499   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:55.224529   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:55.224464   80068 retry.go:31] will retry after 1.132731616s: waiting for machine to come up
	I1014 15:22:56.358972   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:56.359328   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:56.359381   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:56.359280   80068 retry.go:31] will retry after 1.296460105s: waiting for machine to come up
	I1014 15:22:57.657787   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:57.658226   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:57.658249   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:57.658187   80068 retry.go:31] will retry after 1.922384977s: waiting for machine to come up
	I1014 15:22:59.583317   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:59.583698   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:59.583730   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:59.583657   80068 retry.go:31] will retry after 2.451802219s: waiting for machine to come up
	I1014 15:23:02.037037   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:02.037421   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:23:02.037447   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:23:02.037362   80068 retry.go:31] will retry after 3.287657218s: waiting for machine to come up
	I1014 15:23:05.328784   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:05.329242   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:23:05.329268   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:23:05.329212   80068 retry.go:31] will retry after 3.443295733s: waiting for machine to come up
	I1014 15:23:08.776298   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.776859   80033 main.go:141] libmachine: (newest-cni-870289) Found IP for machine: 192.168.72.98
	I1014 15:23:08.776889   80033 main.go:141] libmachine: (newest-cni-870289) Reserving static IP address...
	I1014 15:23:08.776904   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has current primary IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.777369   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "newest-cni-870289", mac: "52:54:00:7d:a1:9e", ip: "192.168.72.98"} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.777411   80033 main.go:141] libmachine: (newest-cni-870289) Reserved static IP address: 192.168.72.98
	I1014 15:23:08.777429   80033 main.go:141] libmachine: (newest-cni-870289) DBG | skip adding static IP to network mk-newest-cni-870289 - found existing host DHCP lease matching {name: "newest-cni-870289", mac: "52:54:00:7d:a1:9e", ip: "192.168.72.98"}
	I1014 15:23:08.777438   80033 main.go:141] libmachine: (newest-cni-870289) Waiting for SSH to be available...
	I1014 15:23:08.777447   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Getting to WaitForSSH function...
	I1014 15:23:08.779826   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.780226   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.780255   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.780369   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Using SSH client type: external
	I1014 15:23:08.780420   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa (-rw-------)
	I1014 15:23:08.780449   80033 main.go:141] libmachine: (newest-cni-870289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:23:08.780469   80033 main.go:141] libmachine: (newest-cni-870289) DBG | About to run SSH command:
	I1014 15:23:08.780487   80033 main.go:141] libmachine: (newest-cni-870289) DBG | exit 0
	I1014 15:23:08.906756   80033 main.go:141] libmachine: (newest-cni-870289) DBG | SSH cmd err, output: <nil>: 
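The retry lines above show libmachine polling the libvirt DHCP leases with a steadily growing delay until the domain reports an address, then confirming reachability with a bare `exit 0` over SSH. A minimal Go sketch of that poll-then-probe pattern follows; lookupLeaseIP is a hypothetical placeholder and the backoff factor is an assumption, not minikube's exact schedule.

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// lookupLeaseIP is a hypothetical stand-in for reading the libvirt DHCP
// leases for a MAC address; it fails until the guest has obtained an address.
func lookupLeaseIP(mac string) (string, error) {
	return "", errors.New("no lease yet") // placeholder only
}

// waitForIP polls lookupLeaseIP with a growing delay, mirroring the
// "will retry after ..." lines in the log above.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	delay := 200 * time.Millisecond
	end := time.Now().Add(deadline)
	for time.Now().Before(end) {
		if ip, err := lookupLeaseIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay; the exact schedule is an assumption
	}
	return "", fmt.Errorf("machine %s did not get an IP within %s", mac, deadline)
}

// sshReachable plays the role of the "exit 0" probe: confirm port 22 accepts
// a TCP connection before real provisioning starts.
func sshReachable(ip string) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 10*time.Second)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if ip, err := waitForIP("52:54:00:7d:a1:9e", 3*time.Second); err == nil && sshReachable(ip) {
		fmt.Println("machine is up at", ip)
	}
}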
	I1014 15:23:08.907078   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetConfigRaw
	I1014 15:23:08.907857   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:08.910201   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.910565   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.910592   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.910798   80033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/config.json ...
	I1014 15:23:08.910967   80033 machine.go:93] provisionDockerMachine start ...
	I1014 15:23:08.910983   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:08.911192   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:08.913226   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.913551   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.913578   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.913711   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:08.913857   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:08.913966   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:08.914084   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:08.914249   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:08.914423   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:08.914433   80033 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:23:09.027295   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:23:09.027321   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:23:09.027600   80033 buildroot.go:166] provisioning hostname "newest-cni-870289"
	I1014 15:23:09.027626   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:23:09.027830   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.030655   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.031085   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.031122   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.031278   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.031472   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.031619   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.031752   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.031881   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:09.032076   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:09.032089   80033 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-870289 && echo "newest-cni-870289" | sudo tee /etc/hostname
	I1014 15:23:09.157584   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-870289
	
	I1014 15:23:09.157624   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.160430   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.160774   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.160806   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.160981   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.161137   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.161302   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.161462   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.161633   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:09.161863   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:09.161888   80033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-870289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-870289/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-870289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:23:09.284868   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
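The shell snippet above keeps the /etc/hosts edit idempotent: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. The same logic as a small Go sketch, assuming direct local file access rather than the SSH runner used in the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry reproduces the idempotent /etc/hosts edit shown above.
// The path and hostname below are taken from the log for illustration only.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	// First pass: is the hostname already mapped anywhere?
	for _, line := range lines {
		if strings.HasSuffix(strings.TrimSpace(line), " "+hostname) {
			return nil
		}
	}
	// Second pass: rewrite an existing 127.0.1.1 entry, else append one.
	for i, line := range lines {
		if strings.HasPrefix(line, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
		}
	}
	lines = append(lines, "127.0.1.1 "+hostname)
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "newest-cni-870289"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}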
	I1014 15:23:09.284902   80033 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:23:09.284979   80033 buildroot.go:174] setting up certificates
	I1014 15:23:09.284995   80033 provision.go:84] configureAuth start
	I1014 15:23:09.285018   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:23:09.285294   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:09.287889   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.288168   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.288202   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.288382   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.290615   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.290796   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.290817   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.290997   80033 provision.go:143] copyHostCerts
	I1014 15:23:09.291069   80033 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:23:09.291111   80033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:23:09.291208   80033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:23:09.291355   80033 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:23:09.291369   80033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:23:09.291417   80033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:23:09.291542   80033 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:23:09.291552   80033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:23:09.291593   80033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:23:09.291691   80033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.newest-cni-870289 san=[127.0.0.1 192.168.72.98 localhost minikube newest-cni-870289]
	I1014 15:23:09.713207   80033 provision.go:177] copyRemoteCerts
	I1014 15:23:09.713270   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:23:09.713298   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.716175   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.716552   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.716585   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.716750   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.716946   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.717090   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.717202   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:09.805363   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:23:09.831107   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:23:09.856337   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:23:09.881208   80033 provision.go:87] duration metric: took 596.195147ms to configureAuth
	I1014 15:23:09.881238   80033 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:23:09.881466   80033 config.go:182] Loaded profile config "newest-cni-870289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:23:09.881570   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.884576   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.884921   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.884951   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.885133   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.885365   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.885553   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.885775   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.885972   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:09.886148   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:09.886162   80033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:23:10.123349   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:23:10.123391   80033 machine.go:96] duration metric: took 1.212412253s to provisionDockerMachine
	I1014 15:23:10.123406   80033 start.go:293] postStartSetup for "newest-cni-870289" (driver="kvm2")
	I1014 15:23:10.123419   80033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:23:10.123440   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.123764   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:23:10.123808   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.126259   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.126680   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.126711   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.126852   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.127033   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.127276   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.127506   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:10.214103   80033 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:23:10.218961   80033 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:23:10.218990   80033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:23:10.219057   80033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:23:10.219144   80033 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:23:10.219266   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:23:10.230319   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:23:10.257243   80033 start.go:296] duration metric: took 133.82151ms for postStartSetup
	I1014 15:23:10.257289   80033 fix.go:56] duration metric: took 19.908725044s for fixHost
	I1014 15:23:10.257313   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.259886   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.260410   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.260443   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.260658   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.260830   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.260980   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.261082   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.261279   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:10.261488   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:10.261503   80033 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:23:10.375678   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728919390.332074383
	
	I1014 15:23:10.375703   80033 fix.go:216] guest clock: 1728919390.332074383
	I1014 15:23:10.375712   80033 fix.go:229] Guest: 2024-10-14 15:23:10.332074383 +0000 UTC Remote: 2024-10-14 15:23:10.257294315 +0000 UTC m=+20.053667264 (delta=74.780068ms)
	I1014 15:23:10.375737   80033 fix.go:200] guest clock delta is within tolerance: 74.780068ms
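fix.go compares the guest's `date +%s.%N` output against the host clock and reports the drift (74.780068ms here) as within tolerance. A tiny sketch of that comparison using the timestamps from the log; the one-second tolerance is an assumption, not minikube's configured value.

package main

import (
	"fmt"
	"time"
)

// clockWithinTolerance mirrors the fix.go check above: compute the absolute
// difference between guest and host wall clocks and compare it to a bound.
func clockWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	guest := time.Unix(1728919390, 332074383) // parsed from the guest's `date +%s.%N`
	host := time.Unix(1728919390, 257294315)  // host wall clock at the same moment
	delta, ok := clockWithinTolerance(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}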
	I1014 15:23:10.375744   80033 start.go:83] releasing machines lock for "newest-cni-870289", held for 20.027197193s
	I1014 15:23:10.375769   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.376026   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:10.378718   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.379157   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.379189   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.379361   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.379883   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.380069   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.380139   80033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:23:10.380189   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.380298   80033 ssh_runner.go:195] Run: cat /version.json
	I1014 15:23:10.380319   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.382926   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383042   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383332   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.383357   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383427   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.383449   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383509   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.383686   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.383701   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.383823   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.383999   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.384011   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.384129   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:10.384164   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:10.488006   80033 ssh_runner.go:195] Run: systemctl --version
	I1014 15:23:10.494280   80033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:23:10.647105   80033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:23:10.653807   80033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:23:10.653885   80033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:23:10.670664   80033 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:23:10.670703   80033 start.go:495] detecting cgroup driver to use...
	I1014 15:23:10.670771   80033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:23:10.687355   80033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:23:10.702155   80033 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:23:10.702214   80033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:23:10.717420   80033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:23:10.733842   80033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:23:10.851005   80033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:23:10.997057   80033 docker.go:233] disabling docker service ...
	I1014 15:23:10.997132   80033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:23:11.013139   80033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:23:11.026490   80033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:23:11.168341   80033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:23:11.299111   80033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:23:11.313239   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:23:11.333046   80033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:23:11.333116   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.344125   80033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:23:11.344197   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.355784   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.367551   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.379112   80033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:23:11.390535   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.401644   80033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.419775   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
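The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image to registry.k8s.io/pause:3.10 and to switch cri-o to the cgroupfs cgroup manager with conmon in the pod cgroup. A sketch that replays the core edits locally via os/exec; it needs root on a cri-o host and is illustrative, not minikube's crio.go.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, e := range edits {
		// Each edit is applied in place, matching the ssh_runner commands above.
		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
			fmt.Printf("sed %q failed: %v: %s\n", e, err, out)
			return
		}
	}
	fmt.Println("cri-o configured for pause:3.10 and cgroupfs")
}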
	I1014 15:23:11.431426   80033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:23:11.444870   80033 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:23:11.444933   80033 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:23:11.464849   80033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
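Here the sysctl probe fails with status 255 because br_netfilter is not loaded yet, which the log treats as non-fatal: it loads the module and then enables IPv4 forwarding. A small sketch of that fallback, assuming root on a Linux host.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureNetfilter mirrors the fallback above: if the bridge netfilter sysctl
// cannot be read, load br_netfilter, then turn on IPv4 forwarding.
func ensureNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge netfilter not available yet, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
}

func main() {
	if err := ensureNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}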
	I1014 15:23:11.478475   80033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:23:11.609186   80033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:23:11.698219   80033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:23:11.698306   80033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:23:11.704195   80033 start.go:563] Will wait 60s for crictl version
	I1014 15:23:11.704251   80033 ssh_runner.go:195] Run: which crictl
	I1014 15:23:11.708221   80033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:23:11.748056   80033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:23:11.748164   80033 ssh_runner.go:195] Run: crio --version
	I1014 15:23:11.775490   80033 ssh_runner.go:195] Run: crio --version
	I1014 15:23:11.807450   80033 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:23:11.808708   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:11.811426   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:11.811929   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:11.811972   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:11.812255   80033 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:23:11.816615   80033 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:23:11.831666   80033 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1014 15:23:11.832932   80033 kubeadm.go:883] updating cluster {Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1014 15:23:11.833062   80033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:23:11.833131   80033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:23:11.870761   80033 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:23:11.870822   80033 ssh_runner.go:195] Run: which lz4
	I1014 15:23:11.875031   80033 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:23:11.879450   80033 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:23:11.879488   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:23:13.247774   80033 crio.go:462] duration metric: took 1.372771401s to copy over tarball
	I1014 15:23:13.247860   80033 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:23:15.503945   80033 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.256049322s)
	I1014 15:23:15.503987   80033 crio.go:469] duration metric: took 2.256180101s to extract the tarball
	I1014 15:23:15.503997   80033 ssh_runner.go:146] rm: /preloaded.tar.lz4
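Because `crictl images` showed no preloaded images, the tarball of cached images is copied to /preloaded.tar.lz4 and unpacked into /var with lz4-compressed tar, then removed. A sketch of that check-extract-clean sequence; the scp step is elided and the paths are taken from the log, so this is illustrative rather than minikube's crio.go implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// extractPreload checks that the preload tarball is present, extracts it into
// dest with lz4-compressed tar (preserving xattrs), and removes the tarball.
func extractPreload(tarball, dest string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball missing, copy it over first: %w", err)
	}
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", dest, "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	fmt.Printf("took %s to extract the tarball\n", time.Since(start))
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}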
	I1014 15:23:15.542478   80033 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:23:15.592608   80033 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:23:15.592629   80033 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:23:15.592639   80033 kubeadm.go:934] updating node { 192.168.72.98 8443 v1.31.1 crio true true} ...
	I1014 15:23:15.592860   80033 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-870289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:23:15.592956   80033 ssh_runner.go:195] Run: crio config
	I1014 15:23:15.654529   80033 cni.go:84] Creating CNI manager for ""
	I1014 15:23:15.654552   80033 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:23:15.654564   80033 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1014 15:23:15.654591   80033 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.98 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-870289 NodeName:newest-cni-870289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:23:15.654765   80033 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-870289"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.98"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.98"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
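The generated config above pins the pod network to 10.42.0.0/16 (from the kubeadm.pod-network-cidr extra option) and the service network to 10.96.0.0/12. A quick, purely illustrative check that the two CIDRs do not overlap, a constraint worth verifying before kubeadm init; this is not part of minikube.

package main

import (
	"fmt"
	"net/netip"
)

// overlaps reports whether two prefixes share any addresses; for valid CIDRs
// it is enough to test whether either network address falls inside the other.
func overlaps(a, b netip.Prefix) bool {
	return a.Contains(b.Addr()) || b.Contains(a.Addr())
}

func main() {
	pod := netip.MustParsePrefix("10.42.0.0/16") // podSubnet from the config above
	svc := netip.MustParsePrefix("10.96.0.0/12") // serviceSubnet from the config above
	fmt.Println("pod/service CIDR overlap:", overlaps(pod, svc))
}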
	
	I1014 15:23:15.654828   80033 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:23:15.666018   80033 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:23:15.666083   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:23:15.676038   80033 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I1014 15:23:15.696205   80033 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:23:15.717230   80033 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2484 bytes)
	I1014 15:23:15.738761   80033 ssh_runner.go:195] Run: grep 192.168.72.98	control-plane.minikube.internal$ /etc/hosts
	I1014 15:23:15.742867   80033 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:23:15.756911   80033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:23:15.900443   80033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:23:15.921348   80033 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289 for IP: 192.168.72.98
	I1014 15:23:15.921368   80033 certs.go:194] generating shared ca certs ...
	I1014 15:23:15.921383   80033 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:23:15.921545   80033 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:23:15.921613   80033 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:23:15.921627   80033 certs.go:256] generating profile certs ...
	I1014 15:23:15.921736   80033 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/client.key
	I1014 15:23:15.921813   80033 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key.5e9d2aba
	I1014 15:23:15.921862   80033 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.key
	I1014 15:23:15.922004   80033 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:23:15.922040   80033 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:23:15.922054   80033 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:23:15.922087   80033 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:23:15.922116   80033 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:23:15.922155   80033 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:23:15.922233   80033 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:23:15.923045   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:23:15.953124   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:23:15.980593   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:23:16.007921   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:23:16.046724   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:23:16.079766   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:23:16.106701   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:23:16.133440   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:23:16.173800   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:23:16.203299   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:23:16.230496   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:23:16.256708   80033 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:23:16.275211   80033 ssh_runner.go:195] Run: openssl version
	I1014 15:23:16.281192   80033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:23:16.291859   80033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:23:16.296208   80033 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:23:16.296254   80033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:23:16.301904   80033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:23:16.313318   80033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:23:16.324029   80033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:23:16.328431   80033 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:23:16.328493   80033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:23:16.334052   80033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:23:16.344559   80033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:23:16.355174   80033 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:23:16.359653   80033 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:23:16.359707   80033 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:23:16.365882   80033 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
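The three openssl runs above compute each CA's OpenSSL subject hash and link it into /etc/ssl/certs as <hash>.0 (for example b5213941.0 for minikubeCA.pem), which is how the system trust store locates the certificate. Below is a minimal Go sketch of that hash-and-symlink step, assuming the openssl binary is on PATH; the helper name and paths are illustrative, not minikube's code.

    // linkCAByHash runs `openssl x509 -hash -noout -in cert` and creates
    // /etc/ssl/certs/<hash>.0 -> cert, mirroring the commands in the log.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCAByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if _, err := os.Lstat(link); err == nil {
            return nil // link already present
        }
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }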
	I1014 15:23:16.378070   80033 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:23:16.382654   80033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:23:16.388737   80033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:23:16.394522   80033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:23:16.400424   80033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:23:16.407812   80033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:23:16.414484   80033 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
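Each `openssl x509 -checkend 86400` invocation above asks whether the certificate will still be valid 24 hours from now; a non-zero exit status signals that it expires within that window. The same test can be expressed directly against the certificate's NotAfter field. The sketch below is a rough Go equivalent, not minikube's implementation, and the path is illustrative.

    // validFor reports whether the PEM certificate at path remains valid
    // for at least d (86400s = 24h in the checks above).
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func validFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for 24h:", ok)
    }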
	I1014 15:23:16.420850   80033 kubeadm.go:392] StartCluster: {Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:23:16.420952   80033 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:23:16.421031   80033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:23:16.468169   80033 cri.go:89] found id: ""
	I1014 15:23:16.468265   80033 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:23:16.478668   80033 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:23:16.478690   80033 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:23:16.478745   80033 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:23:16.488298   80033 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:23:16.489088   80033 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-870289" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:23:16.489556   80033 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-870289" cluster setting kubeconfig missing "newest-cni-870289" context setting]
	I1014 15:23:16.490246   80033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:23:16.491839   80033 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:23:16.501767   80033 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.98
	I1014 15:23:16.501800   80033 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:23:16.501811   80033 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:23:16.501862   80033 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:23:16.543784   80033 cri.go:89] found id: ""
	I1014 15:23:16.543858   80033 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:23:16.562052   80033 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:23:16.576342   80033 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:23:16.576358   80033 kubeadm.go:157] found existing configuration files:
	
	I1014 15:23:16.576389   80033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:23:16.586105   80033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:23:16.586150   80033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:23:16.595648   80033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:23:16.604850   80033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:23:16.604906   80033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:23:16.614756   80033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:23:16.623580   80033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:23:16.623630   80033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:23:16.632764   80033 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:23:16.641433   80033 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:23:16.641487   80033 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:23:16.650649   80033 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:23:16.659923   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:23:16.779107   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:23:17.410373   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:23:17.633042   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:23:17.687518   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
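Because existing configuration files were found, the restart path replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same kubeadm.yaml instead of running a full init. Below is a minimal sketch of that sequence under the paths shown in the log; error handling is simplified and this is not minikube's actual runner.

    // Replays the kubeadm init phases in the order the log shows.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.31.1/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            args := append([]string{"init", "phase"}, strings.Fields(p)...)
            args = append(args, "--config", cfg)
            cmd := exec.Command(kubeadm, args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Fprintf(os.Stderr, "kubeadm init phase %s failed: %v\n", p, err)
                os.Exit(1)
            }
        }
    }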
	I1014 15:23:17.760427   80033 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:23:17.760544   80033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:23:18.261601   80033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:23:18.760969   80033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:23:19.260639   80033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:23:19.277314   80033 api_server.go:72] duration metric: took 1.516886666s to wait for apiserver process to appear ...
	I1014 15:23:19.277351   80033 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:23:19.277370   80033 api_server.go:253] Checking apiserver healthz at https://192.168.72.98:8443/healthz ...
	I1014 15:23:21.674101   80033 api_server.go:279] https://192.168.72.98:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:23:21.674140   80033 api_server.go:103] status: https://192.168.72.98:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:23:21.674152   80033 api_server.go:253] Checking apiserver healthz at https://192.168.72.98:8443/healthz ...
	I1014 15:23:21.693508   80033 api_server.go:279] https://192.168.72.98:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:23:21.693535   80033 api_server.go:103] status: https://192.168.72.98:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:23:21.777773   80033 api_server.go:253] Checking apiserver healthz at https://192.168.72.98:8443/healthz ...
	I1014 15:23:21.792345   80033 api_server.go:279] https://192.168.72.98:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:23:21.792372   80033 api_server.go:103] status: https://192.168.72.98:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:23:22.277457   80033 api_server.go:253] Checking apiserver healthz at https://192.168.72.98:8443/healthz ...
	I1014 15:23:22.283011   80033 api_server.go:279] https://192.168.72.98:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:23:22.283050   80033 api_server.go:103] status: https://192.168.72.98:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:23:22.778292   80033 api_server.go:253] Checking apiserver healthz at https://192.168.72.98:8443/healthz ...
	I1014 15:23:22.788283   80033 api_server.go:279] https://192.168.72.98:8443/healthz returned 200:
	ok
	I1014 15:23:22.798675   80033 api_server.go:141] control plane version: v1.31.1
	I1014 15:23:22.798706   80033 api_server.go:131] duration metric: took 3.521346857s to wait for apiserver health ...
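The healthz probe above first returns 403 (anonymous access to /healthz is refused, most likely because the RBAC bootstrap roles have not been recreated yet), then 500 while the remaining post-start hooks finish, and finally 200. Below is a small sketch of such a poll loop, skipping TLS verification for brevity; it is illustrative, not minikube's api_server.go.

    // Polls the apiserver /healthz endpoint until it returns 200, treating
    // 403/500 responses as "not ready yet".
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }

    func main() {
        if err := waitHealthy("https://192.168.72.98:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }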
	I1014 15:23:22.798717   80033 cni.go:84] Creating CNI manager for ""
	I1014 15:23:22.798726   80033 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:23:22.800913   80033 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:23:22.802392   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:23:22.813416   80033 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
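The 496-byte conflist written above is the bridge CNI configuration; its exact contents are not shown in the log. For orientation only, a generic bridge-plus-portmap conflist of the kind CNI expects might look like the assumed example below, using the pod CIDR (10.42.0.0/16) from the cluster config earlier in the log.

    // Writes an assumed, generic bridge CNI conflist; not minikube's exact bytes.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.42.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }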
	I1014 15:23:22.834464   80033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:23:22.851183   80033 system_pods.go:59] 9 kube-system pods found
	I1014 15:23:22.851215   80033 system_pods.go:61] "coredns-7c65d6cfc9-lvsmv" [902be9c5-f481-44d2-bd01-8a73c3ae19eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:23:22.851223   80033 system_pods.go:61] "coredns-7c65d6cfc9-ptjtw" [63ab43c8-b7dc-42ca-b711-5b1b2c05f142] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:23:22.851230   80033 system_pods.go:61] "etcd-newest-cni-870289" [0a2d5e1e-170b-49fe-b32b-cdfc8ff952e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:23:22.851234   80033 system_pods.go:61] "kube-apiserver-newest-cni-870289" [1f20c934-17c6-44f4-b02a-102f638592d2] Running
	I1014 15:23:22.851240   80033 system_pods.go:61] "kube-controller-manager-newest-cni-870289" [f9740464-04d9-4a55-86d2-044bf20f4ef9] Running
	I1014 15:23:22.851244   80033 system_pods.go:61] "kube-proxy-ttks8" [88131fe1-179f-4044-b7f7-e72ab259d214] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:23:22.851248   80033 system_pods.go:61] "kube-scheduler-newest-cni-870289" [462467d3-1f97-4597-acec-807e6209d12f] Running
	I1014 15:23:22.851258   80033 system_pods.go:61] "metrics-server-6867b74b74-xthtb" [e6aa8e67-abfd-402a-8e7c-746ec079f383] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:23:22.851270   80033 system_pods.go:61] "storage-provisioner" [a5d9f375-a7ca-49bd-b02c-c1fa8e90ce35] Running
	I1014 15:23:22.851276   80033 system_pods.go:74] duration metric: took 16.789917ms to wait for pod list to return data ...
	I1014 15:23:22.851284   80033 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:23:22.860731   80033 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:23:22.860760   80033 node_conditions.go:123] node cpu capacity is 2
	I1014 15:23:22.860773   80033 node_conditions.go:105] duration metric: took 9.484904ms to run NodePressure ...
	I1014 15:23:22.860793   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:23:23.373249   80033 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:23:23.438008   80033 ops.go:34] apiserver oom_adj: -16
	I1014 15:23:23.438037   80033 kubeadm.go:597] duration metric: took 6.959339497s to restartPrimaryControlPlane
	I1014 15:23:23.438047   80033 kubeadm.go:394] duration metric: took 7.017207165s to StartCluster
	I1014 15:23:23.438063   80033 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:23:23.438137   80033 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:23:23.438927   80033 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:23:23.439163   80033 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:23:23.439263   80033 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:23:23.439349   80033 config.go:182] Loaded profile config "newest-cni-870289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:23:23.439390   80033 addons.go:69] Setting default-storageclass=true in profile "newest-cni-870289"
	I1014 15:23:23.439394   80033 addons.go:69] Setting dashboard=true in profile "newest-cni-870289"
	I1014 15:23:23.439460   80033 addons.go:234] Setting addon dashboard=true in "newest-cni-870289"
	I1014 15:23:23.439434   80033 addons.go:69] Setting metrics-server=true in profile "newest-cni-870289"
	I1014 15:23:23.439461   80033 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-870289"
	W1014 15:23:23.439479   80033 addons.go:243] addon dashboard should already be in state true
	I1014 15:23:23.439492   80033 addons.go:234] Setting addon metrics-server=true in "newest-cni-870289"
	W1014 15:23:23.439510   80033 addons.go:243] addon metrics-server should already be in state true
	I1014 15:23:23.439532   80033 host.go:66] Checking if "newest-cni-870289" exists ...
	I1014 15:23:23.439568   80033 host.go:66] Checking if "newest-cni-870289" exists ...
	I1014 15:23:23.440015   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.440034   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.440065   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.440070   80033 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-870289"
	I1014 15:23:23.440095   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.440105   80033 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-870289"
	W1014 15:23:23.440113   80033 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:23:23.440024   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.440154   80033 host.go:66] Checking if "newest-cni-870289" exists ...
	I1014 15:23:23.440171   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.440532   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.440567   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.440729   80033 out.go:177] * Verifying Kubernetes components...
	I1014 15:23:23.442023   80033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:23:23.456617   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39967
	I1014 15:23:23.456626   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40191
	I1014 15:23:23.457199   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.457310   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.457843   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.457870   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.458250   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.458861   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.458893   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.458935   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44059
	I1014 15:23:23.459009   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43331
	I1014 15:23:23.459414   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.459528   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.459629   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.459666   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.460062   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.460081   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.460121   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.460267   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.460292   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.460522   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.460570   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.460599   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.460761   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.460933   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:23:23.463678   80033 addons.go:234] Setting addon default-storageclass=true in "newest-cni-870289"
	W1014 15:23:23.463699   80033 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:23:23.463728   80033 host.go:66] Checking if "newest-cni-870289" exists ...
	I1014 15:23:23.464097   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.464121   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.467198   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.467226   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.478456   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1014 15:23:23.478708   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36133
	I1014 15:23:23.479064   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.479322   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.479458   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42983
	I1014 15:23:23.479790   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.479808   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.479876   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.479919   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.479931   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.480237   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.480432   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.480449   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.480536   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.480704   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:23:23.480813   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.480882   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:23:23.480912   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:23:23.481051   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:23:23.482973   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:23.483026   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:23.484691   80033 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:23:23.484697   80033 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 15:23:23.486016   80033 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1014 15:23:23.486179   80033 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:23:23.486201   80033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:23:23.486218   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:23.487250   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 15:23:23.487263   80033 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 15:23:23.487275   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:23.489862   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34341
	I1014 15:23:23.490197   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.490838   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:23.490870   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.491075   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:23.491239   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.491294   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:23.491462   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:23.491602   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:23.491653   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:23.491757   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.491782   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:23.491935   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:23.492105   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:23.492259   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:23.511296   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.511943   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.511975   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.512369   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.512703   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:23:23.514627   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:23.518820   80033 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:23:23.520174   80033 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:23:23.520188   80033 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:23:23.520208   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:23.523431   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.523926   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:23.523971   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.524115   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:23.524294   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:23.524432   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:23.524564   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:23.529313   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38127
	I1014 15:23:23.529738   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:23:23.530698   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:23:23.530724   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:23:23.532186   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:23:23.532404   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:23:23.534114   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:23.534321   80033 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:23:23.534333   80033 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:23:23.534353   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:23.537535   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.538013   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:23.538032   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:23.538196   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:23.538346   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:23.538532   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:23.538679   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:23.700794   80033 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:23:23.724731   80033 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:23:23.724814   80033 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:23:23.748048   80033 api_server.go:72] duration metric: took 308.845343ms to wait for apiserver process to appear ...
	I1014 15:23:23.748081   80033 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:23:23.748104   80033 api_server.go:253] Checking apiserver healthz at https://192.168.72.98:8443/healthz ...
	I1014 15:23:23.753125   80033 api_server.go:279] https://192.168.72.98:8443/healthz returned 200:
	ok
	I1014 15:23:23.754186   80033 api_server.go:141] control plane version: v1.31.1
	I1014 15:23:23.754212   80033 api_server.go:131] duration metric: took 6.122318ms to wait for apiserver health ...
	I1014 15:23:23.754222   80033 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:23:23.762073   80033 system_pods.go:59] 8 kube-system pods found
	I1014 15:23:23.762113   80033 system_pods.go:61] "coredns-7c65d6cfc9-lvsmv" [902be9c5-f481-44d2-bd01-8a73c3ae19eb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:23:23.762124   80033 system_pods.go:61] "etcd-newest-cni-870289" [0a2d5e1e-170b-49fe-b32b-cdfc8ff952e2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:23:23.762140   80033 system_pods.go:61] "kube-apiserver-newest-cni-870289" [1f20c934-17c6-44f4-b02a-102f638592d2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:23:23.762148   80033 system_pods.go:61] "kube-controller-manager-newest-cni-870289" [f9740464-04d9-4a55-86d2-044bf20f4ef9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:23:23.762161   80033 system_pods.go:61] "kube-proxy-ttks8" [88131fe1-179f-4044-b7f7-e72ab259d214] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:23:23.762173   80033 system_pods.go:61] "kube-scheduler-newest-cni-870289" [462467d3-1f97-4597-acec-807e6209d12f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:23:23.762184   80033 system_pods.go:61] "metrics-server-6867b74b74-xthtb" [e6aa8e67-abfd-402a-8e7c-746ec079f383] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:23:23.762202   80033 system_pods.go:61] "storage-provisioner" [a5d9f375-a7ca-49bd-b02c-c1fa8e90ce35] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 15:23:23.762214   80033 system_pods.go:74] duration metric: took 7.984726ms to wait for pod list to return data ...
	I1014 15:23:23.762231   80033 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:23:23.765209   80033 default_sa.go:45] found service account: "default"
	I1014 15:23:23.765229   80033 default_sa.go:55] duration metric: took 2.99273ms for default service account to be created ...
	I1014 15:23:23.765240   80033 kubeadm.go:582] duration metric: took 326.045982ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 15:23:23.765260   80033 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:23:23.768311   80033 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:23:23.768329   80033 node_conditions.go:123] node cpu capacity is 2
	I1014 15:23:23.768341   80033 node_conditions.go:105] duration metric: took 3.0731ms to run NodePressure ...
	I1014 15:23:23.768352   80033 start.go:241] waiting for startup goroutines ...
	I1014 15:23:23.801265   80033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:23:23.842577   80033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:23:23.854525   80033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:23:23.854548   80033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:23:23.879400   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 15:23:23.879422   80033 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 15:23:23.959889   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 15:23:23.959920   80033 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 15:23:24.013258   80033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:23:24.013279   80033 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:23:24.049064   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 15:23:24.049087   80033 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 15:23:24.080101   80033 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:23:24.080127   80033 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:23:24.135656   80033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:23:24.150539   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 15:23:24.150561   80033 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1014 15:23:24.221883   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 15:23:24.221906   80033 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 15:23:24.304356   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 15:23:24.304382   80033 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 15:23:24.385080   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 15:23:24.385109   80033 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 15:23:24.455916   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 15:23:24.455939   80033 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 15:23:24.560570   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:24.560604   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:24.560885   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:24.560909   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:24.560918   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:24.560925   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:24.561156   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Closing plugin on server side
	I1014 15:23:24.561206   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:24.561226   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:24.565357   80033 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 15:23:24.565377   80033 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 15:23:24.583482   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:24.583507   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:24.583785   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:24.583803   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:24.629522   80033 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 15:23:25.780266   80033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.937651631s)
	I1014 15:23:25.780322   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:25.780326   80033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.644631418s)
	I1014 15:23:25.780367   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:25.780390   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:25.780333   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:25.780650   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Closing plugin on server side
	I1014 15:23:25.780663   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:25.780713   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:25.780723   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:25.780733   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:25.782176   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Closing plugin on server side
	I1014 15:23:25.782203   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Closing plugin on server side
	I1014 15:23:25.782203   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:25.782248   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:25.782234   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:25.782276   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:25.782288   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:25.782257   80033 addons.go:475] Verifying addon metrics-server=true in "newest-cni-870289"
	I1014 15:23:25.782297   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:25.782525   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Closing plugin on server side
	I1014 15:23:25.782561   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:25.782569   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:26.119133   80033 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.48955713s)
	I1014 15:23:26.119186   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:26.119201   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:26.119492   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:26.119514   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:26.119525   80033 main.go:141] libmachine: Making call to close driver server
	I1014 15:23:26.119534   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Close
	I1014 15:23:26.119747   80033 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:23:26.119760   80033 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:23:26.119816   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Closing plugin on server side
	I1014 15:23:26.121358   80033 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-870289 addons enable metrics-server
	
	I1014 15:23:26.122801   80033 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1014 15:23:26.124157   80033 addons.go:510] duration metric: took 2.684893018s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I1014 15:23:26.124190   80033 start.go:246] waiting for cluster config update ...
	I1014 15:23:26.124202   80033 start.go:255] writing updated cluster config ...
	I1014 15:23:26.124455   80033 ssh_runner.go:195] Run: rm -f paused
	I1014 15:23:26.171876   80033 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:23:26.173956   80033 out.go:177] * Done! kubectl is now configured to use "newest-cni-870289" cluster and "default" namespace by default
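The start log above ends with the addon manifests applied and kubectl pointed at the new profile. As a quick sanity check, the same state can be inspected from the host; a minimal sketch, assuming the kubectl context name matches the minikube profile and that the dashboard addon uses its usual kubernetes-dashboard namespace:

	# List which addons minikube reports as enabled for this profile
	minikube -p newest-cni-870289 addons list
	# Confirm the dashboard and metrics-server workloads came up
	kubectl --context newest-cni-870289 -n kubernetes-dashboard get pods
	kubectl --context newest-cni-870289 -n kube-system get deploy metrics-server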
	
	
	==> CRI-O <==
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.947613869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=07344b5c-ea76-45e7-a171-e882b9e62799 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.948886796Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=870e83c3-ab3f-4bbf-8537-663cf42d7c3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.949384197Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919415949361797,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=870e83c3-ab3f-4bbf-8537-663cf42d7c3f name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.949956601Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9d63a1ba-60f4-4548-9c75-4e7c339fcfa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.950008628Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9d63a1ba-60f4-4548-9c75-4e7c339fcfa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.950323024Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728918124367753623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70
-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7
f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annotations:map[st
ring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d
3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e694
71,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9d63a1ba-60f4-4548-9c75-4e7c339fcfa2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.986406702Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a3b34667-def2-4867-ab42-6f2bd8cbd104 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.986678117Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&PodSandboxMetadata{Name:busybox,Uid:73313975-3d02-4629-9437-ec78b344b297,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918139801732047,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:02:03.890651043Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-994hx,Uid:b0291ce4-5503-4bb1-8e36-d956b115c3ac,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:172891
8139801189808,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14T15:02:03.890669131Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cbd200d097ef6e6f0d502f5cd5f3d38cf52346803ce5931f1f5e7ca8da8112f6,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-bcrqs,Uid:508697cd-cf31-4078-8985-5c0b77966695,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918132000482644,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-bcrqs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 508697cd-cf31-4078-8985-5c0b77966695,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-10-14
T15:02:03.890666046Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:62925b5e-ec1d-4d5b-aa70-a4fc555db52d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918124209769012,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"g
cr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-10-14T15:02:03.890667650Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&PodSandboxMetadata{Name:kube-proxy-rh82t,Uid:1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918124207261687,Labels:map[string]string{controller-revision-hash: 648b489c5b,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{ku
bernetes.io/config.seen: 2024-10-14T15:02:03.890670419Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-201291,Uid:be9d4fbf7ec17f9514254bcca1b63f7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918120433817643,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.128:2379,kubernetes.io/config.hash: be9d4fbf7ec17f9514254bcca1b63f7d,kubernetes.io/config.seen: 2024-10-14T15:01:59.961261918Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&PodSandboxMetadata{Name:
kube-apiserver-default-k8s-diff-port-201291,Uid:ba36f4d10fff5c44627000ddc1e69471,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918120413507726,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e69471,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.128:8444,kubernetes.io/config.hash: ba36f4d10fff5c44627000ddc1e69471,kubernetes.io/config.seen: 2024-10-14T15:01:59.895011643Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-201291,Uid:372e03020d4971676e1f7f514f4974ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918120412025992,Labels:map[string]str
ing{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7f514f4974ea,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 372e03020d4971676e1f7f514f4974ea,kubernetes.io/config.seen: 2024-10-14T15:01:59.895010499Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-201291,Uid:8ea78d3f249f4ed9fd101799a78d3e57,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1728918120403961208,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d3e57,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 8ea78d3f249f4ed9fd101799a78d3e57,kubernetes.io/config.seen: 2024-10-14T15:01:59.895006976Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a3b34667-def2-4867-ab42-6f2bd8cbd104 name=/runtime.v1.RuntimeService/ListPodSandbox
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.987373747Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07651764-7c5d-4fb2-a5f8-e03cbcd9a491 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.987427313Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07651764-7c5d-4fb2-a5f8-e03cbcd9a491 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.987616062Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e0302
0d4971676e1f7f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annot
ations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed
9fd101799a78d3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c446
27000ddc1e69471,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=07651764-7c5d-4fb2-a5f8-e03cbcd9a491 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.996306131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01133f46-e887-43d3-8350-df99855a0e67 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.996637431Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01133f46-e887-43d3-8350-df99855a0e67 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.998005782Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7682a7f2-959d-4226-8c42-5c5fbc7eeccf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.998470764Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919415998451597,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7682a7f2-959d-4226-8c42-5c5fbc7eeccf name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.999150876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=629b73ca-8ac7-4122-9929-da6b33021de5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.999201647Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=629b73ca-8ac7-4122-9929-da6b33021de5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:35 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:35.999392231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728918124367753623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70
-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7
f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annotations:map[st
ring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d
3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e694
71,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=629b73ca-8ac7-4122-9929-da6b33021de5 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:36 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:36.033191160Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ac62782-ea0e-43c0-b00b-265b3c40496b name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:36 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:36.033272430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ac62782-ea0e-43c0-b00b-265b3c40496b name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:36 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:36.034070573Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=920f36b8-ddd8-42ab-95aa-9eeec91925c9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:36 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:36.034505338Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919416034483933,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=920f36b8-ddd8-42ab-95aa-9eeec91925c9 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:36 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:36.035203970Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=79155e2e-e435-4b62-b975-d4e5f13c9074 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:36 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:36.035271054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=79155e2e-e435-4b62-b975-d4e5f13c9074 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:36 default-k8s-diff-port-201291 crio[704]: time="2024-10-14 15:23:36.035460584Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918155199929716,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 3,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d1d58f06c02f6e31d834478886bd991508a4c2d9ad0258aa93225671f6be6f38,PodSandboxId:dfcbb62af0cc631a713d54cf52d9adb4854a7c54c6f6ccabdb5f541e2ac16c06,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1728918142397616012,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 73313975-3d02-4629-9437-ec78b344b297,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1,PodSandboxId:106c488f9ab21922d4afc6f3b4b3bbcb764633957969a4df9c459a1bc760a32e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918140174257069,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-994hx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b0291ce4-5503-4bb1-8e36-d956b115c3ac,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\
"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42,PodSandboxId:22a3d648f9dff8c686309f7ad847156012da9c4532a6b36eb70f1ba51aa68ccd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:1728918124372675326,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-rh82t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1dcd3c39-1
bfe-40ac-a012-ea17ea1dfb6d,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076,PodSandboxId:0590d28e358c8bc722e9016e9814871f9cf67cef6809256acd1cc1c1e2b232a6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1728918124367753623,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62925b5e-ec1d-4d5b-aa70
-a4fc555db52d,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa,PodSandboxId:4cbb5db056a6dba0383ea5131f1101858340f24129fc12defb065e22b55f928d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918120665314822,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 372e03020d4971676e1f7
f514f4974ea,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69,PodSandboxId:d7742a4d0ed600db11f7c8793ea86ae3867317ab9d22681470466204d33be567,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918120678989754,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be9d4fbf7ec17f9514254bcca1b63f7d,},Annotations:map[st
ring]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4,PodSandboxId:0a660b7b688faaa376e5b891709378c5b72c2a909aca0846087ac335c41d32e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918120647059319,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea78d3f249f4ed9fd101799a78d
3e57,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f,PodSandboxId:9cf4262d69c300e4fd67e0da5d27d90a583fed442f31b9849ac60444feb6eccd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918120661993982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-201291,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba36f4d10fff5c44627000ddc1e694
71,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=79155e2e-e435-4b62-b975-d4e5f13c9074 name=/runtime.v1.RuntimeService/ListContainers
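The debug entries above are the CRI calls (Version, ImageFsInfo, ListPodSandbox, ListContainers) that feed the container status table below. A minimal sketch of how the same runtime state could be queried by hand over the CRI-O socket, assuming crictl is available on the node (for example after `minikube ssh`):

	# Point crictl at the CRI-O socket used by this cluster
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods          # pod sandboxes (ListPodSandbox)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a         # containers, incl. exited (ListContainers)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo   # image filesystem usage (ImageFsInfo)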
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	54da9997e909c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       3                   0590d28e358c8       storage-provisioner
	d1d58f06c02f6       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   dfcbb62af0cc6       busybox
	6e3748f01b40b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   106c488f9ab21       coredns-7c65d6cfc9-994hx
	8562700fa08dc       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                      21 minutes ago      Running             kube-proxy                1                   22a3d648f9dff       kube-proxy-rh82t
	48bc323790016       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       2                   0590d28e358c8       storage-provisioner
	0aaa149381e52       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   d7742a4d0ed60       etcd-default-k8s-diff-port-201291
	be2f06f84e6b5       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                      21 minutes ago      Running             kube-scheduler            1                   4cbb5db056a6d       kube-scheduler-default-k8s-diff-port-201291
	a2df52bb84059       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                      21 minutes ago      Running             kube-apiserver            1                   9cf4262d69c30       kube-apiserver-default-k8s-diff-port-201291
	7cfcaa231ef94       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                      21 minutes ago      Running             kube-controller-manager   1                   0a660b7b688fa       kube-controller-manager-default-k8s-diff-port-201291
	
	
	==> coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:39059 - 41192 "HINFO IN 4260166790663280947.876893321338102758. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010564947s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-201291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-201291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=default-k8s-diff-port-201291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T14_54_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:54:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-201291
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:23:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:22:57 +0000   Mon, 14 Oct 2024 14:54:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:22:57 +0000   Mon, 14 Oct 2024 14:54:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:22:57 +0000   Mon, 14 Oct 2024 14:54:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:22:57 +0000   Mon, 14 Oct 2024 15:02:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.128
	  Hostname:    default-k8s-diff-port-201291
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 f564671d50a747d2bc6d8c9c9f526232
	  System UUID:                f564671d-50a7-47d2-bc6d-8c9c9f526232
	  Boot ID:                    e3eff562-b446-40cd-8029-d7dae929ab92
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-994hx                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-201291                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-201291             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-201291    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-rh82t                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-201291             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-bcrqs                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-201291 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-201291 event: Registered Node default-k8s-diff-port-201291 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-201291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-201291 event: Registered Node default-k8s-diff-port-201291 in Controller
	
	
	==> dmesg <==
	[Oct14 15:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051014] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.053472] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.982749] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.645472] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.619559] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.355308] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.060196] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060724] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.224205] systemd-fstab-generator[654]: Ignoring "noauto" option for root device
	[  +0.136829] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[  +0.316922] systemd-fstab-generator[695]: Ignoring "noauto" option for root device
	[  +4.303840] systemd-fstab-generator[788]: Ignoring "noauto" option for root device
	[  +0.060399] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.936699] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[Oct14 15:02] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.945044] systemd-fstab-generator[1542]: Ignoring "noauto" option for root device
	[  +4.781615] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.803811] kauditd_printk_skb: 11 callbacks suppressed
	[ +15.422858] kauditd_printk_skb: 28 callbacks suppressed
	
	
	==> etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] <==
	{"level":"info","ts":"2024-10-14T15:02:02.329352Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.128:2379"}
	{"level":"warn","ts":"2024-10-14T15:02:18.921431Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.404591ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261893158185404564 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" mod_revision:629 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" value_size:6828 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-10-14T15:02:18.921560Z","caller":"traceutil/trace.go:171","msg":"trace[919984271] linearizableReadLoop","detail":"{readStateIndex:671; appliedIndex:670; }","duration":"147.617318ms","start":"2024-10-14T15:02:18.773928Z","end":"2024-10-14T15:02:18.921545Z","steps":["trace[919984271] 'read index received'  (duration: 26.001µs)","trace[919984271] 'applied index is now lower than readState.Index'  (duration: 147.590247ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T15:02:18.921782Z","caller":"traceutil/trace.go:171","msg":"trace[1500510879] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"309.236388ms","start":"2024-10-14T15:02:18.612534Z","end":"2024-10-14T15:02:18.921771Z","steps":["trace[1500510879] 'process raft request'  (duration: 145.776831ms)","trace[1500510879] 'compare'  (duration: 162.138668ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T15:02:18.921921Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T15:02:18.612516Z","time spent":"309.322705ms","remote":"127.0.0.1:56754","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6906,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" mod_revision:629 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" value_size:6828 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-201291\" > >"}
	{"level":"warn","ts":"2024-10-14T15:02:19.083280Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.024219ms","expected-duration":"100ms","prefix":"","request":"header:<ID:3261893158185404566 > lease_revoke:<id:2d44928b85ec0430>","response":"size:29"}
	{"level":"info","ts":"2024-10-14T15:02:19.083385Z","caller":"traceutil/trace.go:171","msg":"trace[1642204654] linearizableReadLoop","detail":"{readStateIndex:672; appliedIndex:671; }","duration":"154.185789ms","start":"2024-10-14T15:02:18.929184Z","end":"2024-10-14T15:02:19.083370Z","steps":["trace[1642204654] 'read index received'  (duration: 47.029692ms)","trace[1642204654] 'applied index is now lower than readState.Index'  (duration: 107.155032ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-14T15:02:19.083509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.310259ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-201291\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-10-14T15:02:19.083534Z","caller":"traceutil/trace.go:171","msg":"trace[654780904] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-201291; range_end:; response_count:1; response_revision:630; }","duration":"154.345312ms","start":"2024-10-14T15:02:18.929181Z","end":"2024-10-14T15:02:19.083526Z","steps":["trace[654780904] 'agreement among raft nodes before linearized reading'  (duration: 154.229017ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:12:02.361411Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":889}
	{"level":"info","ts":"2024-10-14T15:12:02.379741Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":889,"took":"17.899369ms","hash":3738844866,"current-db-size-bytes":2883584,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":2883584,"current-db-size-in-use":"2.9 MB"}
	{"level":"info","ts":"2024-10-14T15:12:02.379824Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3738844866,"revision":889,"compact-revision":-1}
	{"level":"info","ts":"2024-10-14T15:17:02.370415Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1131}
	{"level":"info","ts":"2024-10-14T15:17:02.376317Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1131,"took":"4.979074ms","hash":3482251955,"current-db-size-bytes":2883584,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1667072,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-10-14T15:17:02.376460Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3482251955,"revision":1131,"compact-revision":889}
	{"level":"info","ts":"2024-10-14T15:22:02.378778Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1374}
	{"level":"info","ts":"2024-10-14T15:22:02.383261Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1374,"took":"3.739515ms","hash":2337744532,"current-db-size-bytes":2883584,"current-db-size":"2.9 MB","current-db-size-in-use-bytes":1638400,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-14T15:22:02.383363Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2337744532,"revision":1374,"compact-revision":1131}
	{"level":"info","ts":"2024-10-14T15:23:17.381428Z","caller":"traceutil/trace.go:171","msg":"trace[297829480] linearizableReadLoop","detail":"{readStateIndex:1981; appliedIndex:1980; }","duration":"283.39475ms","start":"2024-10-14T15:23:17.097975Z","end":"2024-10-14T15:23:17.381370Z","steps":["trace[297829480] 'read index received'  (duration: 283.259229ms)","trace[297829480] 'applied index is now lower than readState.Index'  (duration: 134.979µs)"],"step_count":2}
	{"level":"info","ts":"2024-10-14T15:23:17.381876Z","caller":"traceutil/trace.go:171","msg":"trace[1164494465] transaction","detail":"{read_only:false; response_revision:1679; number_of_response:1; }","duration":"381.403602ms","start":"2024-10-14T15:23:17.000449Z","end":"2024-10-14T15:23:17.381853Z","steps":["trace[1164494465] 'process raft request'  (duration: 380.832191ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-14T15:23:17.382212Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"255.724817ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-10-14T15:23:17.382275Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-10-14T15:23:17.000433Z","time spent":"381.515728ms","remote":"127.0.0.1:56738","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1677 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-10-14T15:23:17.382384Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"284.40228ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T15:23:17.382421Z","caller":"traceutil/trace.go:171","msg":"trace[702568534] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1679; }","duration":"284.443862ms","start":"2024-10-14T15:23:17.097970Z","end":"2024-10-14T15:23:17.382414Z","steps":["trace[702568534] 'agreement among raft nodes before linearized reading'  (duration: 284.356135ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:23:17.382309Z","caller":"traceutil/trace.go:171","msg":"trace[1177640859] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1679; }","duration":"255.878577ms","start":"2024-10-14T15:23:17.126414Z","end":"2024-10-14T15:23:17.382293Z","steps":["trace[1177640859] 'agreement among raft nodes before linearized reading'  (duration: 255.50018ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:23:36 up 22 min,  0 users,  load average: 0.20, 0.24, 0.17
	Linux default-k8s-diff-port-201291 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] <==
	I1014 15:20:04.764578       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:20:04.764626       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:22:03.763545       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:22:03.763905       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1014 15:22:04.765281       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:22:04.765340       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:22:04.765291       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:22:04.765427       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:22:04.766576       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:22:04.766644       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:23:04.767016       1 handler_proxy.go:99] no RequestInfo found in the context
	W1014 15:23:04.767017       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:23:04.767445       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1014 15:23:04.767517       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:23:04.768668       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:23:04.768802       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] <==
	I1014 15:18:07.956512       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:18:13.003869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="169.846µs"
	I1014 15:18:25.002836       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="96.426µs"
	E1014 15:18:37.477757       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:18:37.964070       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:19:07.485680       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:19:07.972073       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:19:37.492502       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:19:37.979376       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:20:07.499206       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:20:07.992477       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:20:37.505317       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:20:37.999952       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:21:07.512045       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:21:08.007438       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:21:37.518432       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:21:38.013768       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:22:07.525140       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:22:08.021689       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:22:37.531917       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:22:38.030530       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:22:57.651302       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-201291"
	E1014 15:23:07.537854       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:23:08.037789       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:23:24.008194       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="320.407µs"
	
	
	==> kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:02:04.774347       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:02:04.839518       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.128"]
	E1014 15:02:04.840246       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:02:04.941234       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:02:04.941304       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:02:04.941336       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:02:04.966379       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:02:04.973423       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:02:04.973500       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:02:04.975755       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:02:04.981821       1 config.go:328] "Starting node config controller"
	I1014 15:02:04.981907       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:02:04.983189       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:02:04.983309       1 config.go:199] "Starting service config controller"
	I1014 15:02:04.983333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:02:05.084615       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:02:05.084722       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 15:02:05.086642       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] <==
	I1014 15:02:01.973523       1 serving.go:386] Generated self-signed cert in-memory
	W1014 15:02:03.685310       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 15:02:03.685383       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 15:02:03.685398       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 15:02:03.685406       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 15:02:03.745689       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I1014 15:02:03.745786       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:02:03.749145       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1014 15:02:03.749576       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 15:02:03.749966       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1014 15:02:03.750790       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1014 15:02:03.850595       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:22:37 default-k8s-diff-port-201291 kubelet[913]: E1014 15:22:37.988574     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:22:40 default-k8s-diff-port-201291 kubelet[913]: E1014 15:22:40.318766     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919360318290594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:40 default-k8s-diff-port-201291 kubelet[913]: E1014 15:22:40.319228     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919360318290594,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:48 default-k8s-diff-port-201291 kubelet[913]: E1014 15:22:48.987518     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:22:50 default-k8s-diff-port-201291 kubelet[913]: E1014 15:22:50.321538     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919370321170927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:50 default-k8s-diff-port-201291 kubelet[913]: E1014 15:22:50.322140     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919370321170927,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:59 default-k8s-diff-port-201291 kubelet[913]: E1014 15:22:59.988520     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:23:00 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:00.017840     913 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:23:00 default-k8s-diff-port-201291 kubelet[913]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:23:00 default-k8s-diff-port-201291 kubelet[913]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:23:00 default-k8s-diff-port-201291 kubelet[913]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:23:00 default-k8s-diff-port-201291 kubelet[913]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:23:00 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:00.324653     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919380324358694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:00 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:00.324704     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919380324358694,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:10 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:10.326020     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919390325570729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:10 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:10.326542     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919390325570729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:13 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:13.003612     913 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 14 15:23:13 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:13.003694     913 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 14 15:23:13 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:13.003887     913 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tl9fl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-bcrqs_kube-system(508697cd-cf31-4078-8985-5c0b77966695): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 14 15:23:13 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:13.005445     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:23:20 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:20.328734     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919400328161022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:20 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:20.329422     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919400328161022,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:23 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:23.989704     913 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-bcrqs" podUID="508697cd-cf31-4078-8985-5c0b77966695"
	Oct 14 15:23:30 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:30.330931     913 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919410330525713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:30 default-k8s-diff-port-201291 kubelet[913]: E1014 15:23:30.331473     913 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919410330525713,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] <==
	I1014 15:02:04.510565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 15:02:34.514449       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] <==
	I1014 15:02:35.292622       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 15:02:35.310275       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 15:02:35.310510       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 15:02:52.717439       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 15:02:52.717769       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-201291_4291a378-32ef-499c-b603-0b1c483483cb!
	I1014 15:02:52.722257       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cf6f0ca-8e1e-43ca-81cd-d0b61c17bc59", APIVersion:"v1", ResourceVersion:"669", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-201291_4291a378-32ef-499c-b603-0b1c483483cb became leader
	I1014 15:02:52.817947       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-201291_4291a378-32ef-499c-b603-0b1c483483cb!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
E1014 15:23:36.994120   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-bcrqs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 describe pod metrics-server-6867b74b74-bcrqs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-201291 describe pod metrics-server-6867b74b74-bcrqs: exit status 1 (59.232851ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-bcrqs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-201291 describe pod metrics-server-6867b74b74-bcrqs: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (478.96s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (440.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-989166 -n embed-certs-989166
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-14 15:23:13.603596879 +0000 UTC m=+6277.884945213
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-989166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-989166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.525µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-989166 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-989166 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-989166 logs -n 25: (1.398950876s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 15:21 UTC | 14 Oct 24 15:21 UTC |
	| start   | -p newest-cni-870289 --memory=2200 --alsologtostderr   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:21 UTC | 14 Oct 24 15:22 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	| addons  | enable metrics-server -p newest-cni-870289             | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-870289                                   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-870289                  | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC | 14 Oct 24 15:22 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-870289 --memory=2200 --alsologtostderr   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:22 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 15:22:50
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 15:22:50.242436   80033 out.go:345] Setting OutFile to fd 1 ...
	I1014 15:22:50.242564   80033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 15:22:50.242574   80033 out.go:358] Setting ErrFile to fd 2...
	I1014 15:22:50.242580   80033 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 15:22:50.242878   80033 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 15:22:50.243414   80033 out.go:352] Setting JSON to false
	I1014 15:22:50.244264   80033 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7520,"bootTime":1728911850,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 15:22:50.244362   80033 start.go:139] virtualization: kvm guest
	I1014 15:22:50.246493   80033 out.go:177] * [newest-cni-870289] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 15:22:50.248081   80033 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 15:22:50.248114   80033 notify.go:220] Checking for updates...
	I1014 15:22:50.250426   80033 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 15:22:50.251684   80033 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:22:50.252816   80033 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 15:22:50.253913   80033 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 15:22:50.255087   80033 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 15:22:50.256910   80033 config.go:182] Loaded profile config "newest-cni-870289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:22:50.257349   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:22:50.257401   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:22:50.273158   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42101
	I1014 15:22:50.273668   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:22:50.274247   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:22:50.274267   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:22:50.274732   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:22:50.274949   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:50.275246   80033 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 15:22:50.275664   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:22:50.275741   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:22:50.289988   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I1014 15:22:50.290297   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:22:50.290752   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:22:50.290775   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:22:50.291064   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:22:50.291255   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:50.325994   80033 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 15:22:50.327400   80033 start.go:297] selected driver: kvm2
	I1014 15:22:50.327414   80033 start.go:901] validating driver "kvm2" against &{Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:22:50.327507   80033 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 15:22:50.328209   80033 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 15:22:50.328312   80033 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 15:22:50.343812   80033 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 15:22:50.344268   80033 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 15:22:50.344309   80033 cni.go:84] Creating CNI manager for ""
	I1014 15:22:50.344374   80033 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:22:50.344435   80033 start.go:340] cluster config:
	{Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:22:50.344552   80033 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 15:22:50.346535   80033 out.go:177] * Starting "newest-cni-870289" primary control-plane node in "newest-cni-870289" cluster
	I1014 15:22:50.347981   80033 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:22:50.348025   80033 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 15:22:50.348036   80033 cache.go:56] Caching tarball of preloaded images
	I1014 15:22:50.348131   80033 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 15:22:50.348144   80033 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
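
The preload step above only verifies that a cached image tarball for v1.31.1 on cri-o already exists and then skips the download. A minimal sketch of that check in Go, assuming an illustrative cache-path helper (not minikube's actual preload.go API); the path and file-name pattern are copied from the log but treated as example values:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // cachedPreloadPath rebuilds the cache path seen in the log for a given
    // Kubernetes version. Illustrative only, not minikube's real layout logic.
    func cachedPreloadPath(minikubeHome, k8sVersion string) string {
    	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
    	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
    	p := cachedPreloadPath("/home/jenkins/.minikube", "v1.31.1")
    	if _, err := os.Stat(p); err == nil {
    		fmt.Println("found local preload, skipping download:", p)
    	} else {
    		fmt.Println("no cached preload, would download:", p)
    	}
    }
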
	I1014 15:22:50.348252   80033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/config.json ...
	I1014 15:22:50.348479   80033 start.go:360] acquireMachinesLock for newest-cni-870289: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:22:50.348534   80033 start.go:364] duration metric: took 34.27µs to acquireMachinesLock for "newest-cni-870289"
	I1014 15:22:50.348554   80033 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:22:50.348563   80033 fix.go:54] fixHost starting: 
	I1014 15:22:50.348833   80033 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:22:50.348886   80033 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:22:50.363200   80033 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32819
	I1014 15:22:50.363690   80033 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:22:50.364189   80033 main.go:141] libmachine: Using API Version  1
	I1014 15:22:50.364208   80033 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:22:50.364519   80033 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:22:50.364711   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:50.364849   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:22:50.366406   80033 fix.go:112] recreateIfNeeded on newest-cni-870289: state=Stopped err=<nil>
	I1014 15:22:50.366431   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	W1014 15:22:50.366576   80033 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:22:50.368610   80033 out.go:177] * Restarting existing kvm2 VM for "newest-cni-870289" ...
	I1014 15:22:50.369973   80033 main.go:141] libmachine: (newest-cni-870289) Calling .Start
	I1014 15:22:50.370176   80033 main.go:141] libmachine: (newest-cni-870289) Ensuring networks are active...
	I1014 15:22:50.371043   80033 main.go:141] libmachine: (newest-cni-870289) Ensuring network default is active
	I1014 15:22:50.371365   80033 main.go:141] libmachine: (newest-cni-870289) Ensuring network mk-newest-cni-870289 is active
	I1014 15:22:50.371756   80033 main.go:141] libmachine: (newest-cni-870289) Getting domain xml...
	I1014 15:22:50.372450   80033 main.go:141] libmachine: (newest-cni-870289) Creating domain...
	I1014 15:22:51.608832   80033 main.go:141] libmachine: (newest-cni-870289) Waiting to get IP...
	I1014 15:22:51.609871   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:51.610285   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:51.610374   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:51.610290   80068 retry.go:31] will retry after 225.531686ms: waiting for machine to come up
	I1014 15:22:51.837899   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:51.838389   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:51.838413   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:51.838337   80068 retry.go:31] will retry after 320.099873ms: waiting for machine to come up
	I1014 15:22:52.159722   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:52.160196   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:52.160214   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:52.160162   80068 retry.go:31] will retry after 366.320676ms: waiting for machine to come up
	I1014 15:22:52.527657   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:52.528083   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:52.528130   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:52.528054   80068 retry.go:31] will retry after 506.276838ms: waiting for machine to come up
	I1014 15:22:53.035693   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:53.036224   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:53.036247   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:53.036163   80068 retry.go:31] will retry after 601.197956ms: waiting for machine to come up
	I1014 15:22:53.638867   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:53.639380   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:53.639405   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:53.639336   80068 retry.go:31] will retry after 806.198335ms: waiting for machine to come up
	I1014 15:22:54.446655   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:54.447106   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:54.447136   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:54.447054   80068 retry.go:31] will retry after 774.90593ms: waiting for machine to come up
	I1014 15:22:55.224109   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:55.224499   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:55.224529   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:55.224464   80068 retry.go:31] will retry after 1.132731616s: waiting for machine to come up
	I1014 15:22:56.358972   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:56.359328   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:56.359381   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:56.359280   80068 retry.go:31] will retry after 1.296460105s: waiting for machine to come up
	I1014 15:22:57.657787   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:57.658226   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:57.658249   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:57.658187   80068 retry.go:31] will retry after 1.922384977s: waiting for machine to come up
	I1014 15:22:59.583317   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:59.583698   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:59.583730   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:59.583657   80068 retry.go:31] will retry after 2.451802219s: waiting for machine to come up
	I1014 15:23:02.037037   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:02.037421   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:23:02.037447   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:23:02.037362   80068 retry.go:31] will retry after 3.287657218s: waiting for machine to come up
	I1014 15:23:05.328784   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:05.329242   80033 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:23:05.329268   80033 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:23:05.329212   80068 retry.go:31] will retry after 3.443295733s: waiting for machine to come up
	I1014 15:23:08.776298   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.776859   80033 main.go:141] libmachine: (newest-cni-870289) Found IP for machine: 192.168.72.98
	I1014 15:23:08.776889   80033 main.go:141] libmachine: (newest-cni-870289) Reserving static IP address...
	I1014 15:23:08.776904   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has current primary IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.777369   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "newest-cni-870289", mac: "52:54:00:7d:a1:9e", ip: "192.168.72.98"} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.777411   80033 main.go:141] libmachine: (newest-cni-870289) Reserved static IP address: 192.168.72.98
	I1014 15:23:08.777429   80033 main.go:141] libmachine: (newest-cni-870289) DBG | skip adding static IP to network mk-newest-cni-870289 - found existing host DHCP lease matching {name: "newest-cni-870289", mac: "52:54:00:7d:a1:9e", ip: "192.168.72.98"}
	I1014 15:23:08.777438   80033 main.go:141] libmachine: (newest-cni-870289) Waiting for SSH to be available...
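
The retry.go entries above poll libvirt for the domain's DHCP lease, sleeping a little longer after each miss until an address appears. A minimal Go sketch of that retry-with-growing-delay pattern, assuming a placeholder lookupLease helper; the names and intervals are illustrative, not minikube's retry.go API:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    var errNoLease = errors.New("no DHCP lease yet")

    // lookupLease stands in for querying libvirt for the domain's current IP.
    func lookupLease(attempt int) (string, error) {
    	if attempt < 5 {
    		return "", errNoLease
    	}
    	return "192.168.72.98", nil
    }

    // waitForIP retries with a randomized, growing delay, roughly matching the
    // "will retry after ..." intervals recorded in the log.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for attempt := 0; time.Now().Before(deadline); attempt++ {
    		ip, err := lookupLease(attempt)
    		if err == nil {
    			return ip, nil
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay = delay * 3 / 2 // grow the base delay each attempt
    	}
    	return "", fmt.Errorf("timed out waiting for IP")
    }

    func main() {
    	ip, err := waitForIP(30 * time.Second)
    	fmt.Println(ip, err)
    }
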
	I1014 15:23:08.777447   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Getting to WaitForSSH function...
	I1014 15:23:08.779826   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.780226   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.780255   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.780369   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Using SSH client type: external
	I1014 15:23:08.780420   80033 main.go:141] libmachine: (newest-cni-870289) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa (-rw-------)
	I1014 15:23:08.780449   80033 main.go:141] libmachine: (newest-cni-870289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:23:08.780469   80033 main.go:141] libmachine: (newest-cni-870289) DBG | About to run SSH command:
	I1014 15:23:08.780487   80033 main.go:141] libmachine: (newest-cni-870289) DBG | exit 0
	I1014 15:23:08.906756   80033 main.go:141] libmachine: (newest-cni-870289) DBG | SSH cmd err, output: <nil>: 
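
Provisioning then waits for SSH by shelling out to the system ssh client and running `exit 0` with the options captured above (host key checking off, short connect timeout, key-only auth). A rough sketch of the same probe via os/exec; the user, address, and key path are placeholder values from this log, and the retry cadence is assumed:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady runs `exit 0` through the external ssh client with the same kind
    // of options the log shows; a nil error means the guest accepted the login.
    func sshReady(user, ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		fmt.Sprintf("%s@%s", user, ip),
    		"exit 0",
    	}
    	return exec.Command("ssh", args...).Run()
    }

    func main() {
    	for {
    		if err := sshReady("docker", "192.168.72.98", "/path/to/id_rsa"); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
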
	I1014 15:23:08.907078   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetConfigRaw
	I1014 15:23:08.907857   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:08.910201   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.910565   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.910592   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.910798   80033 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/config.json ...
	I1014 15:23:08.910967   80033 machine.go:93] provisionDockerMachine start ...
	I1014 15:23:08.910983   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:08.911192   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:08.913226   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.913551   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:08.913578   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:08.913711   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:08.913857   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:08.913966   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:08.914084   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:08.914249   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:08.914423   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:08.914433   80033 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:23:09.027295   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:23:09.027321   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:23:09.027600   80033 buildroot.go:166] provisioning hostname "newest-cni-870289"
	I1014 15:23:09.027626   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:23:09.027830   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.030655   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.031085   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.031122   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.031278   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.031472   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.031619   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.031752   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.031881   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:09.032076   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:09.032089   80033 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-870289 && echo "newest-cni-870289" | sudo tee /etc/hostname
	I1014 15:23:09.157584   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-870289
	
	I1014 15:23:09.157624   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.160430   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.160774   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.160806   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.160981   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.161137   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.161302   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.161462   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.161633   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:09.161863   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:09.161888   80033 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-870289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-870289/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-870289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:23:09.284868   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:23:09.284902   80033 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:23:09.284979   80033 buildroot.go:174] setting up certificates
	I1014 15:23:09.284995   80033 provision.go:84] configureAuth start
	I1014 15:23:09.285018   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:23:09.285294   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:09.287889   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.288168   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.288202   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.288382   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.290615   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.290796   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.290817   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.290997   80033 provision.go:143] copyHostCerts
	I1014 15:23:09.291069   80033 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:23:09.291111   80033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:23:09.291208   80033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:23:09.291355   80033 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:23:09.291369   80033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:23:09.291417   80033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:23:09.291542   80033 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:23:09.291552   80033 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:23:09.291593   80033 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
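
The copyHostCerts lines above remove any stale ca.pem, cert.pem, and key.pem at the profile root and copy fresh copies from the certs directory. A small Go sketch of that remove-then-copy refresh, with the .minikube path as an illustrative value rather than the jenkins workspace path in the log:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // refreshCert replaces dst with a fresh copy of src, mirroring the
    // "found ..., removing ..., cp:" sequence recorded by exec_runner.go.
    func refreshCert(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	home := "/home/jenkins/.minikube" // illustrative profile root
    	for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
    		if err := refreshCert(filepath.Join(home, "certs", name), filepath.Join(home, name)); err != nil {
    			fmt.Println("copy failed:", err)
    		}
    	}
    }
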
	I1014 15:23:09.291691   80033 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.newest-cni-870289 san=[127.0.0.1 192.168.72.98 localhost minikube newest-cni-870289]
	I1014 15:23:09.713207   80033 provision.go:177] copyRemoteCerts
	I1014 15:23:09.713270   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:23:09.713298   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.716175   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.716552   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.716585   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.716750   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.716946   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.717090   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.717202   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:09.805363   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:23:09.831107   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:23:09.856337   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:23:09.881208   80033 provision.go:87] duration metric: took 596.195147ms to configureAuth
	I1014 15:23:09.881238   80033 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:23:09.881466   80033 config.go:182] Loaded profile config "newest-cni-870289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:23:09.881570   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:09.884576   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.884921   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:09.884951   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:09.885133   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:09.885365   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.885553   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:09.885775   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:09.885972   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:09.886148   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:09.886162   80033 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:23:10.123349   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:23:10.123391   80033 machine.go:96] duration metric: took 1.212412253s to provisionDockerMachine
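
provisionDockerMachine finishes by writing a sysconfig drop-in that flags the service CIDR as an insecure registry range and restarting CRI-O, as the command and its echoed output above show. A sketch that rebuilds the same one-liner for an arbitrary CIDR; the helper name is made up for illustration and is not part of minikube's code:

    package main

    import "fmt"

    // crioRegistryCmd reproduces the provisioning command from the log: create
    // /etc/sysconfig, write CRIO_MINIKUBE_OPTIONS with the insecure-registry
    // range, then restart the crio service.
    func crioRegistryCmd(serviceCIDR string) string {
    	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
    	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"%s\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", content)
    }

    func main() {
    	fmt.Println(crioRegistryCmd("10.96.0.0/12"))
    }
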
	I1014 15:23:10.123406   80033 start.go:293] postStartSetup for "newest-cni-870289" (driver="kvm2")
	I1014 15:23:10.123419   80033 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:23:10.123440   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.123764   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:23:10.123808   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.126259   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.126680   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.126711   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.126852   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.127033   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.127276   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.127506   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:10.214103   80033 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:23:10.218961   80033 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:23:10.218990   80033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:23:10.219057   80033 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:23:10.219144   80033 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:23:10.219266   80033 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:23:10.230319   80033 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:23:10.257243   80033 start.go:296] duration metric: took 133.82151ms for postStartSetup
	I1014 15:23:10.257289   80033 fix.go:56] duration metric: took 19.908725044s for fixHost
	I1014 15:23:10.257313   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.259886   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.260410   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.260443   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.260658   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.260830   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.260980   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.261082   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.261279   80033 main.go:141] libmachine: Using SSH client type: native
	I1014 15:23:10.261488   80033 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:23:10.261503   80033 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:23:10.375678   80033 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728919390.332074383
	
	I1014 15:23:10.375703   80033 fix.go:216] guest clock: 1728919390.332074383
	I1014 15:23:10.375712   80033 fix.go:229] Guest: 2024-10-14 15:23:10.332074383 +0000 UTC Remote: 2024-10-14 15:23:10.257294315 +0000 UTC m=+20.053667264 (delta=74.780068ms)
	I1014 15:23:10.375737   80033 fix.go:200] guest clock delta is within tolerance: 74.780068ms
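
The clock check above reads `date +%s.%N` on the guest, compares it with the host timestamp taken at the same moment, and accepts the drift because the 74.780068ms delta is within tolerance. A self-contained sketch of that comparison, using the two timestamps from this log; the one-second tolerance is an assumed value for illustration, and the parser assumes the full nine-digit fraction that `date +%N` prints:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	const tolerance = time.Second // assumed threshold for this sketch
    	guest, err := parseGuestClock("1728919390.332074383") // guest value from the log
    	if err != nil {
    		panic(err)
    	}
    	host := time.Unix(1728919390, 257294315) // host timestamp from the same log line
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
    }

Run against the values above, this prints a delta of 74.780068ms, matching the fix.go line in the log.
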
	I1014 15:23:10.375744   80033 start.go:83] releasing machines lock for "newest-cni-870289", held for 20.027197193s
	I1014 15:23:10.375769   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.376026   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:10.378718   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.379157   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.379189   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.379361   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.379883   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.380069   80033 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:23:10.380139   80033 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:23:10.380189   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.380298   80033 ssh_runner.go:195] Run: cat /version.json
	I1014 15:23:10.380319   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:23:10.382926   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383042   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383332   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.383357   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383427   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:10.383449   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:10.383509   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.383686   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:23:10.383701   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.383823   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:23:10.383999   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.384011   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:23:10.384129   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:10.384164   80033 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:23:10.488006   80033 ssh_runner.go:195] Run: systemctl --version
	I1014 15:23:10.494280   80033 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:23:10.647105   80033 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:23:10.653807   80033 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:23:10.653885   80033 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:23:10.670664   80033 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:23:10.670703   80033 start.go:495] detecting cgroup driver to use...
	I1014 15:23:10.670771   80033 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:23:10.687355   80033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:23:10.702155   80033 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:23:10.702214   80033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:23:10.717420   80033 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:23:10.733842   80033 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:23:10.851005   80033 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:23:10.997057   80033 docker.go:233] disabling docker service ...
	I1014 15:23:10.997132   80033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:23:11.013139   80033 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:23:11.026490   80033 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:23:11.168341   80033 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:23:11.299111   80033 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:23:11.313239   80033 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:23:11.333046   80033 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:23:11.333116   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.344125   80033 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:23:11.344197   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.355784   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.367551   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.379112   80033 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:23:11.390535   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.401644   80033 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.419775   80033 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:23:11.431426   80033 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:23:11.444870   80033 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:23:11.444933   80033 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:23:11.464849   80033 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:23:11.478475   80033 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:23:11.609186   80033 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:23:11.698219   80033 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:23:11.698306   80033 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:23:11.704195   80033 start.go:563] Will wait 60s for crictl version
	I1014 15:23:11.704251   80033 ssh_runner.go:195] Run: which crictl
	I1014 15:23:11.708221   80033 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:23:11.748056   80033 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:23:11.748164   80033 ssh_runner.go:195] Run: crio --version
	I1014 15:23:11.775490   80033 ssh_runner.go:195] Run: crio --version
	I1014 15:23:11.807450   80033 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:23:11.808708   80033 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:23:11.811426   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:11.811929   80033 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:23:01 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:23:11.811972   80033 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:23:11.812255   80033 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:23:11.816615   80033 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:23:11.831666   80033 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.248237847Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919394248212531,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bf4069b4-048f-4525-a785-bac6afcb5546 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.249060268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc18c918-e498-45c2-a474-ee0238dd23f6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.249120703Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc18c918-e498-45c2-a474-ee0238dd23f6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.249298116Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc18c918-e498-45c2-a474-ee0238dd23f6 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.297629097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4501471c-d690-4690-8838-80783d0cee76 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.297719460Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4501471c-d690-4690-8838-80783d0cee76 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.299490368Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a3139c01-afa9-4991-bae8-afef5737a6bd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.300348574Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919394300306383,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a3139c01-afa9-4991-bae8-afef5737a6bd name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.301303498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5da37328-800c-49fa-a962-00b34cdae2b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.301373015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5da37328-800c-49fa-a962-00b34cdae2b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.301718071Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5da37328-800c-49fa-a962-00b34cdae2b8 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.345619373Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a75acff1-9c3f-4944-b153-3733bdd2a936 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.345738178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a75acff1-9c3f-4944-b153-3733bdd2a936 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.347113593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a391dfed-133e-47cd-a842-1d01855362bc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.348364193Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919394348325844,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a391dfed-133e-47cd-a842-1d01855362bc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.349357216Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c1d19342-0e3f-47c3-a65d-e1de220c7980 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.349404674Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c1d19342-0e3f-47c3-a65d-e1de220c7980 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.350002947Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c1d19342-0e3f-47c3-a65d-e1de220c7980 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.388168939Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8d657e5-9a7b-4a36-8ea5-5fc6efebee24 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.388256970Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8d657e5-9a7b-4a36-8ea5-5fc6efebee24 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.389454208Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=52864e87-56f2-427f-89fe-b86332e43894 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.389912495Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919394389825960,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=52864e87-56f2-427f-89fe-b86332e43894 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.390481339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4f43984b-c6bb-4a01-9f1b-453a83e6a584 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.390550435Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4f43984b-c6bb-4a01-9f1b-453a83e6a584 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:23:14 embed-certs-989166 crio[711]: time="2024-10-14 15:23:14.390777556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f,PodSandboxId:90a8fa5d83794cb27125b318c759daaecb493b4ead0cf6a8bceeab524e8bbdb7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918401741539738,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6caa59-bc75-4e8f-8052-86d963b92fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677,PodSandboxId:8ab2c3a2539215c9a1236476d313b2f87b1053d14c11a9cbc8b3a5cd286b2498,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400814926802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6bmwg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cf9ad75-b75b-4cce-aad8-d68a810a5d0a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79,PodSandboxId:b2a087c3065ef67a69b7464a6da796f47042581b1fb803f3a3382a2e9492d729,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918400860509598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-l95hj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
563de05-ef49-4fa9-bf0b-a826fbc8bb14,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3,PodSandboxId:b6c06b464ea07d78a2c6d0a74f164f2dafe318e618ceec87ba251c61b87c97cf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt
:1728918400063608430,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-g572s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702,PodSandboxId:f72862ad45faa6095c364339887da6e857411344efc5129ebd87770b2c794175,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918389132304370,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bad6ed702edb980f9ab495bd0c87ec1e,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381,PodSandboxId:dc62fbba3d604cbc5300b0387a7c15263d341eb2a5c97f34f4ab28ccab3cc7d7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918389143660220,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ac42a1687ced5a6942f248383d04a7c,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c,PodSandboxId:3ca4cc5b4ea3021eefda14bbaf610856eab354503e98a9fa4753bb72c36d5d68,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918389065825328,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9ff6f2bfff2c52f6a606532fcbf27dc,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769,PodSandboxId:41897ecafacb5ac253b2dab27beb6a84d331d86052346a587246f988f21e9d57,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:1728918389037557389,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80,PodSandboxId:65a1ca161721b36a5cd15eb9f83602d7bb104fb1f24c48ca114f43d02f79148b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918101459157703,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-989166,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 151746586ecdf42f597979a13a5b43e9,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4f43984b-c6bb-4a01-9f1b-453a83e6a584 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	fdcf89c5b9143       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   90a8fa5d83794       storage-provisioner
	881e4e8d79988       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   b2a087c3065ef       coredns-7c65d6cfc9-l95hj
	f1596c06b1cc7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   16 minutes ago      Running             coredns                   0                   8ab2c3a253921       coredns-7c65d6cfc9-6bmwg
	9ec492cd0941d       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   16 minutes ago      Running             kube-proxy                0                   b6c06b464ea07       kube-proxy-g572s
	41c5829ee86da       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   16 minutes ago      Running             kube-scheduler            2                   dc62fbba3d604       kube-scheduler-embed-certs-989166
	8ee6b6b51cbe1       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   f72862ad45faa       etcd-embed-certs-989166
	ea51e3357f925       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   16 minutes ago      Running             kube-controller-manager   2                   3ca4cc5b4ea30       kube-controller-manager-embed-certs-989166
	4d04e68f07c0c       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   16 minutes ago      Running             kube-apiserver            2                   41897ecafacb5       kube-apiserver-embed-certs-989166
	0b8ad44dccb25       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   21 minutes ago      Exited              kube-apiserver            1                   65a1ca161721b       kube-apiserver-embed-certs-989166
	
	
	==> coredns [881e4e8d79988127a3a53dd208777ee743be837a58404a3cc6ad00d4fbd4ce79] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [f1596c06b1cc7d137360ae0461eb1800bb226dafeac56ad335816086cb1ff677] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-989166
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-989166
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=embed-certs-989166
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:06:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-989166
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:23:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:22:04 +0000   Mon, 14 Oct 2024 15:06:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:22:04 +0000   Mon, 14 Oct 2024 15:06:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:22:04 +0000   Mon, 14 Oct 2024 15:06:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:22:04 +0000   Mon, 14 Oct 2024 15:06:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.41
	  Hostname:    embed-certs-989166
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 9bb0f0ffa8f04dc1b7be39d4d45995f7
	  System UUID:                9bb0f0ff-a8f0-4dc1-b7be-39d4d45995f7
	  Boot ID:                    71741bef-62d9-4a2a-8633-17b06b62bf73
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6bmwg                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-l95hj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-embed-certs-989166                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-embed-certs-989166             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-embed-certs-989166    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-g572s                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-embed-certs-989166             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-jl6pp               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 16m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node embed-certs-989166 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node embed-certs-989166 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node embed-certs-989166 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node embed-certs-989166 event: Registered Node embed-certs-989166 in Controller
	
	
	==> dmesg <==
	[  +0.051045] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039978] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.850488] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.479916] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.586706] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.250268] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.059214] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.056814] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.169081] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.137500] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.294875] systemd-fstab-generator[702]: Ignoring "noauto" option for root device
	[  +4.134310] systemd-fstab-generator[794]: Ignoring "noauto" option for root device
	[  +2.222006] systemd-fstab-generator[916]: Ignoring "noauto" option for root device
	[  +0.058470] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.574120] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.945173] kauditd_printk_skb: 87 callbacks suppressed
	[Oct14 15:06] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.695371] systemd-fstab-generator[2539]: Ignoring "noauto" option for root device
	[  +4.628169] kauditd_printk_skb: 56 callbacks suppressed
	[  +1.949286] systemd-fstab-generator[2861]: Ignoring "noauto" option for root device
	[  +5.507569] systemd-fstab-generator[3009]: Ignoring "noauto" option for root device
	[  +0.061353] kauditd_printk_skb: 14 callbacks suppressed
	[  +8.992893] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [8ee6b6b51cbe1c774c0c3ed13264f30b18e97a4de91dacc050b5f6f8ee5d1702] <==
	{"level":"info","ts":"2024-10-14T15:06:30.279677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgPreVoteResp from 903e0dada8362847 at term 1"}
	{"level":"info","ts":"2024-10-14T15:06:30.279732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became candidate at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.279757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 received MsgVoteResp from 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.279822Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"903e0dada8362847 became leader at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.279847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 903e0dada8362847 elected leader 903e0dada8362847 at term 2"}
	{"level":"info","ts":"2024-10-14T15:06:30.283367Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:06:30.286168Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"903e0dada8362847","local-member-attributes":"{Name:embed-certs-989166 ClientURLs:[https://192.168.39.41:2379]}","request-path":"/0/members/903e0dada8362847/attributes","cluster-id":"b5cacf25c2f2940e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T15:06:30.286936Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:06:30.287465Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:06:30.288690Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:06:30.288841Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T15:06:30.290943Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T15:06:30.291418Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:06:30.294260Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T15:06:30.297191Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.41:2379"}
	{"level":"info","ts":"2024-10-14T15:06:30.297594Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b5cacf25c2f2940e","local-member-id":"903e0dada8362847","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:06:30.297697Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:06:30.297748Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:16:30.348589Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":682}
	{"level":"info","ts":"2024-10-14T15:16:30.359655Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":682,"took":"10.192585ms","hash":1316446561,"current-db-size-bytes":2326528,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2326528,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-10-14T15:16:30.359755Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1316446561,"revision":682,"compact-revision":-1}
	{"level":"info","ts":"2024-10-14T15:21:30.355265Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":925}
	{"level":"info","ts":"2024-10-14T15:21:30.359723Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":925,"took":"3.716618ms","hash":3170450120,"current-db-size-bytes":2326528,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1609728,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-10-14T15:21:30.359822Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3170450120,"revision":925,"compact-revision":682}
	{"level":"info","ts":"2024-10-14T15:22:27.105055Z","caller":"traceutil/trace.go:171","msg":"trace[54479055] transaction","detail":"{read_only:false; response_revision:1217; number_of_response:1; }","duration":"119.853291ms","start":"2024-10-14T15:22:26.985139Z","end":"2024-10-14T15:22:27.104993Z","steps":["trace[54479055] 'process raft request'  (duration: 119.385926ms)"],"step_count":1}
	
	
	==> kernel <==
	 15:23:14 up 21 min,  0 users,  load average: 0.03, 0.08, 0.09
	Linux embed-certs-989166 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0b8ad44dccb258b853ffd70972b32085b80373855ebcf3cba7280b4c90abdb80] <==
	W1014 15:06:21.452329       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.452445       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.459210       1 logging.go:55] [core] [Channel #76 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.628230       1 logging.go:55] [core] [Channel #100 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.640036       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.664313       1 logging.go:55] [core] [Channel #88 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.669747       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.729339       1 logging.go:55] [core] [Channel #55 SubChannel #56]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.763175       1 logging.go:55] [core] [Channel #73 SubChannel #74]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.777747       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.839757       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.893159       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.916629       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:21.992741       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.090553       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.123522       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.168400       1 logging.go:55] [core] [Channel #46 SubChannel #47]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.443446       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.451096       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:22.558556       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:23.674507       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:24.675488       1 logging.go:55] [core] [Channel #208 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:25.672322       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:25.727618       1 logging.go:55] [core] [Channel #142 SubChannel #143]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:06:26.014260       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [4d04e68f07c0c2c838dc7a38787a9617099bdacc038927b4f03202edbdca0769] <==
	I1014 15:19:32.918323       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:19:32.918426       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:21:31.914437       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:21:31.914569       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1014 15:21:32.916764       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:21:32.916971       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:21:32.917163       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:21:32.917347       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:21:32.918196       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:21:32.918425       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:22:32.919450       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:22:32.919746       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:22:32.919477       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:22:32.919980       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:22:32.921144       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:22:32.921191       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [ea51e3357f925bf10bb4111435c3aecfbc82841d01ceab7c2dc8a43cf4f11b2c] <==
	E1014 15:18:08.967791       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:18:09.418852       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:18:10.790165       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="68.573µs"
	E1014 15:18:38.975301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:18:39.427464       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:19:08.981761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:19:09.435526       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:19:38.988191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:19:39.445748       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:20:08.995093       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:20:09.454146       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:20:39.002401       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:20:39.462837       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:21:09.008479       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:21:09.470444       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:21:39.015900       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:21:39.477828       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:22:04.255612       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-989166"
	E1014 15:22:09.022224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:22:09.485458       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:22:39.030221       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:22:39.496637       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:23:09.036441       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:23:09.505713       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:23:14.792298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="235.662µs"
	
	
	==> kube-proxy [9ec492cd0941d666c3ab3edb5bf6b3195aa58f63059016eab320a7e64fccf2f3] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:06:40.427552       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:06:40.445814       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.41"]
	E1014 15:06:40.447326       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:06:40.536300       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:06:40.536366       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:06:40.536397       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:06:40.584301       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:06:40.584560       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:06:40.584588       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:06:40.588024       1 config.go:199] "Starting service config controller"
	I1014 15:06:40.588118       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:06:40.588169       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:06:40.588174       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:06:40.589004       1 config.go:328] "Starting node config controller"
	I1014 15:06:40.589031       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:06:40.688587       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 15:06:40.688671       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:06:40.690750       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [41c5829ee86da69a6477dc7cb46fb180d63ec89af4631feda9ec441fd74a9381] <==
	W1014 15:06:31.922282       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 15:06:31.922320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:31.922288       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 15:06:31.922506       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:32.777640       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 15:06:32.777753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:32.876928       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 15:06:32.877245       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:32.952341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 15:06:32.952936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.064691       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 15:06:33.065104       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.101757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 15:06:33.101977       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.101774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 15:06:33.102066       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.104018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 15:06:33.104079       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.124661       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 15:06:33.124938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:06:33.129162       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 15:06:33.129264       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 15:06:33.163682       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 15:06:33.163794       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 15:06:36.113809       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:22:17 embed-certs-989166 kubelet[2868]: E1014 15:22:17.773788    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:22:25 embed-certs-989166 kubelet[2868]: E1014 15:22:25.043852    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919345043425878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:25 embed-certs-989166 kubelet[2868]: E1014 15:22:25.044008    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919345043425878,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:30 embed-certs-989166 kubelet[2868]: E1014 15:22:30.774550    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:22:34 embed-certs-989166 kubelet[2868]: E1014 15:22:34.818960    2868 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:22:34 embed-certs-989166 kubelet[2868]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:22:34 embed-certs-989166 kubelet[2868]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:22:34 embed-certs-989166 kubelet[2868]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:22:34 embed-certs-989166 kubelet[2868]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:22:35 embed-certs-989166 kubelet[2868]: E1014 15:22:35.046063    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919355045306774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:35 embed-certs-989166 kubelet[2868]: E1014 15:22:35.046108    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919355045306774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:45 embed-certs-989166 kubelet[2868]: E1014 15:22:45.048116    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919365047735372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:45 embed-certs-989166 kubelet[2868]: E1014 15:22:45.048436    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919365047735372,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:45 embed-certs-989166 kubelet[2868]: E1014 15:22:45.774058    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:22:55 embed-certs-989166 kubelet[2868]: E1014 15:22:55.051246    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919375050708561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:55 embed-certs-989166 kubelet[2868]: E1014 15:22:55.051776    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919375050708561,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:59 embed-certs-989166 kubelet[2868]: E1014 15:22:59.880619    2868 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 14 15:22:59 embed-certs-989166 kubelet[2868]: E1014 15:22:59.880697    2868 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Oct 14 15:22:59 embed-certs-989166 kubelet[2868]: E1014 15:22:59.880944    2868 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pcbsq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation
:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Std
in:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-jl6pp_kube-system(c244e53d-c492-426a-be7f-d405f2defd17): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Oct 14 15:22:59 embed-certs-989166 kubelet[2868]: E1014 15:22:59.882322    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:23:05 embed-certs-989166 kubelet[2868]: E1014 15:23:05.053298    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919385052960579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:05 embed-certs-989166 kubelet[2868]: E1014 15:23:05.053669    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919385052960579,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:14 embed-certs-989166 kubelet[2868]: E1014 15:23:14.777074    2868 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-jl6pp" podUID="c244e53d-c492-426a-be7f-d405f2defd17"
	Oct 14 15:23:15 embed-certs-989166 kubelet[2868]: E1014 15:23:15.055837    2868 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919395055330276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:23:15 embed-certs-989166 kubelet[2868]: E1014 15:23:15.056193    2868 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919395055330276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134617,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [fdcf89c5b91436e10c03dbc9fea768588d72a4997a958dd457c29075913fe20f] <==
	I1014 15:06:41.859434       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 15:06:41.873373       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 15:06:41.873520       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 15:06:41.888094       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 15:06:41.888275       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-989166_0a3d1888-9541-478e-b17d-819ae5260e2d!
	I1014 15:06:41.889297       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"da1f4c41-9bb1-4afd-8cbf-fa16c3cfabf6", APIVersion:"v1", ResourceVersion:"393", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-989166_0a3d1888-9541-478e-b17d-819ae5260e2d became leader
	I1014 15:06:41.995474       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-989166_0a3d1888-9541-478e-b17d-819ae5260e2d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-989166 -n embed-certs-989166
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-989166 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-jl6pp
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-989166 describe pod metrics-server-6867b74b74-jl6pp
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-989166 describe pod metrics-server-6867b74b74-jl6pp: exit status 1 (65.310078ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-jl6pp" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-989166 describe pod metrics-server-6867b74b74-jl6pp: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (440.28s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.78s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-813300 -n no-preload-813300
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-10-14 15:22:31.432799753 +0000 UTC m=+6235.714148082
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-813300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-813300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-813300 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
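For manual triage of this failure, the check made at start_stop_delete_test.go:287-297 can be reproduced roughly as follows; this is only a sketch, with the context name no-preload-813300, the kubernetes-dashboard namespace, and the k8s-app=kubernetes-dashboard label taken from the log above rather than from the harness itself:

	# List the dashboard pods the test waited 9m0s for (label selector from the log above)
	kubectl --context no-preload-813300 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	# Inspect the scraper deployment the harness tried to describe after the deadline had expired
	kubectl --context no-preload-813300 describe deploy dashboard-metrics-scraper -n kubernetes-dashboard
	# List the deployment images; the test expects one of them to contain registry.k8s.io/echoserver:1.4
	kubectl --context no-preload-813300 get deploy -n kubernetes-dashboard -o jsonpath='{.items[*].spec.template.spec.containers[*].image}'

The last command should report an image containing registry.k8s.io/echoserver:1.4 if the dashboard addon applied the --images=MetricsScraper override shown in the Audit table below; an empty result or a different image would match the "addon did not load correct image" assertion above.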
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-813300 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-813300 logs -n 25: (1.372009037s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo find                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo crio                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-517678                                       | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 15:21 UTC | 14 Oct 24 15:21 UTC |
	| start   | -p newest-cni-870289 --memory=2200 --alsologtostderr   | newest-cni-870289            | jenkins | v1.34.0 | 14 Oct 24 15:21 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 15:21:54
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 15:21:54.615513   79298 out.go:345] Setting OutFile to fd 1 ...
	I1014 15:21:54.615777   79298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 15:21:54.615787   79298 out.go:358] Setting ErrFile to fd 2...
	I1014 15:21:54.615792   79298 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 15:21:54.615982   79298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 15:21:54.616607   79298 out.go:352] Setting JSON to false
	I1014 15:21:54.617585   79298 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":7465,"bootTime":1728911850,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 15:21:54.617648   79298 start.go:139] virtualization: kvm guest
	I1014 15:21:54.620205   79298 out.go:177] * [newest-cni-870289] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 15:21:54.621954   79298 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 15:21:54.622022   79298 notify.go:220] Checking for updates...
	I1014 15:21:54.624424   79298 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 15:21:54.625741   79298 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:21:54.627006   79298 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 15:21:54.628175   79298 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 15:21:54.629344   79298 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 15:21:54.630919   79298 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:21:54.631051   79298 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:21:54.631203   79298 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:21:54.631281   79298 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 15:21:54.667145   79298 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 15:21:54.668359   79298 start.go:297] selected driver: kvm2
	I1014 15:21:54.668372   79298 start.go:901] validating driver "kvm2" against <nil>
	I1014 15:21:54.668382   79298 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 15:21:54.669049   79298 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 15:21:54.669130   79298 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 15:21:54.685678   79298 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 15:21:54.685745   79298 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1014 15:21:54.685821   79298 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1014 15:21:54.686106   79298 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1014 15:21:54.686149   79298 cni.go:84] Creating CNI manager for ""
	I1014 15:21:54.686214   79298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:21:54.686232   79298 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 15:21:54.686296   79298 start.go:340] cluster config:
	{Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:21:54.686427   79298 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 15:21:54.688248   79298 out.go:177] * Starting "newest-cni-870289" primary control-plane node in "newest-cni-870289" cluster
	I1014 15:21:54.689251   79298 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:21:54.689297   79298 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 15:21:54.689309   79298 cache.go:56] Caching tarball of preloaded images
	I1014 15:21:54.689386   79298 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 15:21:54.689397   79298 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1014 15:21:54.689477   79298 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/config.json ...
	I1014 15:21:54.689492   79298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/config.json: {Name:mk82b8a29933b73996383b2b38a2e15c6f48c225 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:21:54.689612   79298 start.go:360] acquireMachinesLock for newest-cni-870289: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:21:54.689638   79298 start.go:364] duration metric: took 13.844µs to acquireMachinesLock for "newest-cni-870289"
	I1014 15:21:54.689653   79298 start.go:93] Provisioning new machine with config: &{Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:21:54.689705   79298 start.go:125] createHost starting for "" (driver="kvm2")
	I1014 15:21:54.691238   79298 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1014 15:21:54.691400   79298 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:21:54.691458   79298 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:21:54.706943   79298 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34099
	I1014 15:21:54.707453   79298 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:21:54.707991   79298 main.go:141] libmachine: Using API Version  1
	I1014 15:21:54.708012   79298 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:21:54.708398   79298 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:21:54.708587   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:21:54.708759   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:21:54.708909   79298 start.go:159] libmachine.API.Create for "newest-cni-870289" (driver="kvm2")
	I1014 15:21:54.708942   79298 client.go:168] LocalClient.Create starting
	I1014 15:21:54.708976   79298 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem
	I1014 15:21:54.709009   79298 main.go:141] libmachine: Decoding PEM data...
	I1014 15:21:54.709023   79298 main.go:141] libmachine: Parsing certificate...
	I1014 15:21:54.709069   79298 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem
	I1014 15:21:54.709088   79298 main.go:141] libmachine: Decoding PEM data...
	I1014 15:21:54.709099   79298 main.go:141] libmachine: Parsing certificate...
	I1014 15:21:54.709122   79298 main.go:141] libmachine: Running pre-create checks...
	I1014 15:21:54.709137   79298 main.go:141] libmachine: (newest-cni-870289) Calling .PreCreateCheck
	I1014 15:21:54.709493   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetConfigRaw
	I1014 15:21:54.709847   79298 main.go:141] libmachine: Creating machine...
	I1014 15:21:54.709860   79298 main.go:141] libmachine: (newest-cni-870289) Calling .Create
	I1014 15:21:54.709998   79298 main.go:141] libmachine: (newest-cni-870289) Creating KVM machine...
	I1014 15:21:54.711261   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found existing default KVM network
	I1014 15:21:54.712403   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:54.712262   79322 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:0c:42:4e} reservation:<nil>}
	I1014 15:21:54.713214   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:54.713153   79322 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:16:fd:dd} reservation:<nil>}
	I1014 15:21:54.713931   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:54.713883   79322 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:df:73:25} reservation:<nil>}
	I1014 15:21:54.715099   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:54.715023   79322 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000390fc0}
	I1014 15:21:54.715123   79298 main.go:141] libmachine: (newest-cni-870289) DBG | created network xml: 
	I1014 15:21:54.715135   79298 main.go:141] libmachine: (newest-cni-870289) DBG | <network>
	I1014 15:21:54.715143   79298 main.go:141] libmachine: (newest-cni-870289) DBG |   <name>mk-newest-cni-870289</name>
	I1014 15:21:54.715154   79298 main.go:141] libmachine: (newest-cni-870289) DBG |   <dns enable='no'/>
	I1014 15:21:54.715165   79298 main.go:141] libmachine: (newest-cni-870289) DBG |   
	I1014 15:21:54.715180   79298 main.go:141] libmachine: (newest-cni-870289) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1014 15:21:54.715192   79298 main.go:141] libmachine: (newest-cni-870289) DBG |     <dhcp>
	I1014 15:21:54.715217   79298 main.go:141] libmachine: (newest-cni-870289) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1014 15:21:54.715248   79298 main.go:141] libmachine: (newest-cni-870289) DBG |     </dhcp>
	I1014 15:21:54.715262   79298 main.go:141] libmachine: (newest-cni-870289) DBG |   </ip>
	I1014 15:21:54.715268   79298 main.go:141] libmachine: (newest-cni-870289) DBG |   
	I1014 15:21:54.715276   79298 main.go:141] libmachine: (newest-cni-870289) DBG | </network>
	I1014 15:21:54.715286   79298 main.go:141] libmachine: (newest-cni-870289) DBG | 
	I1014 15:21:54.720508   79298 main.go:141] libmachine: (newest-cni-870289) DBG | trying to create private KVM network mk-newest-cni-870289 192.168.72.0/24...
	I1014 15:21:54.797179   79298 main.go:141] libmachine: (newest-cni-870289) Setting up store path in /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289 ...
	I1014 15:21:54.797215   79298 main.go:141] libmachine: (newest-cni-870289) Building disk image from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 15:21:54.797226   79298 main.go:141] libmachine: (newest-cni-870289) DBG | private KVM network mk-newest-cni-870289 192.168.72.0/24 created
	I1014 15:21:54.797245   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:54.797117   79322 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 15:21:54.797281   79298 main.go:141] libmachine: (newest-cni-870289) Downloading /home/jenkins/minikube-integration/19790-7836/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso...
	I1014 15:21:55.042320   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:55.042194   79322 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa...
	I1014 15:21:55.234678   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:55.234534   79322 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/newest-cni-870289.rawdisk...
	I1014 15:21:55.234712   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Writing magic tar header
	I1014 15:21:55.234746   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Writing SSH key tar header
	I1014 15:21:55.234815   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:55.234723   79322 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289 ...
	I1014 15:21:55.234869   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289
	I1014 15:21:55.234889   79298 main.go:141] libmachine: (newest-cni-870289) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289 (perms=drwx------)
	I1014 15:21:55.234903   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube/machines
	I1014 15:21:55.234922   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 15:21:55.234936   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/19790-7836
	I1014 15:21:55.234951   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1014 15:21:55.234962   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Checking permissions on dir: /home/jenkins
	I1014 15:21:55.234971   79298 main.go:141] libmachine: (newest-cni-870289) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube/machines (perms=drwxr-xr-x)
	I1014 15:21:55.234982   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Checking permissions on dir: /home
	I1014 15:21:55.234996   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Skipping /home - not owner
	I1014 15:21:55.235011   79298 main.go:141] libmachine: (newest-cni-870289) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836/.minikube (perms=drwxr-xr-x)
	I1014 15:21:55.235024   79298 main.go:141] libmachine: (newest-cni-870289) Setting executable bit set on /home/jenkins/minikube-integration/19790-7836 (perms=drwxrwxr-x)
	I1014 15:21:55.235035   79298 main.go:141] libmachine: (newest-cni-870289) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1014 15:21:55.235044   79298 main.go:141] libmachine: (newest-cni-870289) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1014 15:21:55.235055   79298 main.go:141] libmachine: (newest-cni-870289) Creating domain...
	I1014 15:21:55.236044   79298 main.go:141] libmachine: (newest-cni-870289) define libvirt domain using xml: 
	I1014 15:21:55.236067   79298 main.go:141] libmachine: (newest-cni-870289) <domain type='kvm'>
	I1014 15:21:55.236079   79298 main.go:141] libmachine: (newest-cni-870289)   <name>newest-cni-870289</name>
	I1014 15:21:55.236091   79298 main.go:141] libmachine: (newest-cni-870289)   <memory unit='MiB'>2200</memory>
	I1014 15:21:55.236117   79298 main.go:141] libmachine: (newest-cni-870289)   <vcpu>2</vcpu>
	I1014 15:21:55.236126   79298 main.go:141] libmachine: (newest-cni-870289)   <features>
	I1014 15:21:55.236132   79298 main.go:141] libmachine: (newest-cni-870289)     <acpi/>
	I1014 15:21:55.236139   79298 main.go:141] libmachine: (newest-cni-870289)     <apic/>
	I1014 15:21:55.236143   79298 main.go:141] libmachine: (newest-cni-870289)     <pae/>
	I1014 15:21:55.236150   79298 main.go:141] libmachine: (newest-cni-870289)     
	I1014 15:21:55.236155   79298 main.go:141] libmachine: (newest-cni-870289)   </features>
	I1014 15:21:55.236162   79298 main.go:141] libmachine: (newest-cni-870289)   <cpu mode='host-passthrough'>
	I1014 15:21:55.236187   79298 main.go:141] libmachine: (newest-cni-870289)   
	I1014 15:21:55.236208   79298 main.go:141] libmachine: (newest-cni-870289)   </cpu>
	I1014 15:21:55.236217   79298 main.go:141] libmachine: (newest-cni-870289)   <os>
	I1014 15:21:55.236224   79298 main.go:141] libmachine: (newest-cni-870289)     <type>hvm</type>
	I1014 15:21:55.236236   79298 main.go:141] libmachine: (newest-cni-870289)     <boot dev='cdrom'/>
	I1014 15:21:55.236243   79298 main.go:141] libmachine: (newest-cni-870289)     <boot dev='hd'/>
	I1014 15:21:55.236255   79298 main.go:141] libmachine: (newest-cni-870289)     <bootmenu enable='no'/>
	I1014 15:21:55.236264   79298 main.go:141] libmachine: (newest-cni-870289)   </os>
	I1014 15:21:55.236272   79298 main.go:141] libmachine: (newest-cni-870289)   <devices>
	I1014 15:21:55.236286   79298 main.go:141] libmachine: (newest-cni-870289)     <disk type='file' device='cdrom'>
	I1014 15:21:55.236302   79298 main.go:141] libmachine: (newest-cni-870289)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/boot2docker.iso'/>
	I1014 15:21:55.236313   79298 main.go:141] libmachine: (newest-cni-870289)       <target dev='hdc' bus='scsi'/>
	I1014 15:21:55.236322   79298 main.go:141] libmachine: (newest-cni-870289)       <readonly/>
	I1014 15:21:55.236331   79298 main.go:141] libmachine: (newest-cni-870289)     </disk>
	I1014 15:21:55.236350   79298 main.go:141] libmachine: (newest-cni-870289)     <disk type='file' device='disk'>
	I1014 15:21:55.236365   79298 main.go:141] libmachine: (newest-cni-870289)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1014 15:21:55.236397   79298 main.go:141] libmachine: (newest-cni-870289)       <source file='/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/newest-cni-870289.rawdisk'/>
	I1014 15:21:55.236408   79298 main.go:141] libmachine: (newest-cni-870289)       <target dev='hda' bus='virtio'/>
	I1014 15:21:55.236416   79298 main.go:141] libmachine: (newest-cni-870289)     </disk>
	I1014 15:21:55.236426   79298 main.go:141] libmachine: (newest-cni-870289)     <interface type='network'>
	I1014 15:21:55.236433   79298 main.go:141] libmachine: (newest-cni-870289)       <source network='mk-newest-cni-870289'/>
	I1014 15:21:55.236446   79298 main.go:141] libmachine: (newest-cni-870289)       <model type='virtio'/>
	I1014 15:21:55.236457   79298 main.go:141] libmachine: (newest-cni-870289)     </interface>
	I1014 15:21:55.236467   79298 main.go:141] libmachine: (newest-cni-870289)     <interface type='network'>
	I1014 15:21:55.236477   79298 main.go:141] libmachine: (newest-cni-870289)       <source network='default'/>
	I1014 15:21:55.236487   79298 main.go:141] libmachine: (newest-cni-870289)       <model type='virtio'/>
	I1014 15:21:55.236495   79298 main.go:141] libmachine: (newest-cni-870289)     </interface>
	I1014 15:21:55.236505   79298 main.go:141] libmachine: (newest-cni-870289)     <serial type='pty'>
	I1014 15:21:55.236527   79298 main.go:141] libmachine: (newest-cni-870289)       <target port='0'/>
	I1014 15:21:55.236549   79298 main.go:141] libmachine: (newest-cni-870289)     </serial>
	I1014 15:21:55.236558   79298 main.go:141] libmachine: (newest-cni-870289)     <console type='pty'>
	I1014 15:21:55.236569   79298 main.go:141] libmachine: (newest-cni-870289)       <target type='serial' port='0'/>
	I1014 15:21:55.236580   79298 main.go:141] libmachine: (newest-cni-870289)     </console>
	I1014 15:21:55.236590   79298 main.go:141] libmachine: (newest-cni-870289)     <rng model='virtio'>
	I1014 15:21:55.236606   79298 main.go:141] libmachine: (newest-cni-870289)       <backend model='random'>/dev/random</backend>
	I1014 15:21:55.236621   79298 main.go:141] libmachine: (newest-cni-870289)     </rng>
	I1014 15:21:55.236632   79298 main.go:141] libmachine: (newest-cni-870289)     
	I1014 15:21:55.236641   79298 main.go:141] libmachine: (newest-cni-870289)     
	I1014 15:21:55.236649   79298 main.go:141] libmachine: (newest-cni-870289)   </devices>
	I1014 15:21:55.236666   79298 main.go:141] libmachine: (newest-cni-870289) </domain>
	I1014 15:21:55.236679   79298 main.go:141] libmachine: (newest-cni-870289) 
	I1014 15:21:55.241226   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:d2:2e:69 in network default
	I1014 15:21:55.241848   79298 main.go:141] libmachine: (newest-cni-870289) Ensuring networks are active...
	I1014 15:21:55.241871   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:55.242779   79298 main.go:141] libmachine: (newest-cni-870289) Ensuring network default is active
	I1014 15:21:55.243120   79298 main.go:141] libmachine: (newest-cni-870289) Ensuring network mk-newest-cni-870289 is active
	I1014 15:21:55.243705   79298 main.go:141] libmachine: (newest-cni-870289) Getting domain xml...
	I1014 15:21:55.244412   79298 main.go:141] libmachine: (newest-cni-870289) Creating domain...
	I1014 15:21:56.504310   79298 main.go:141] libmachine: (newest-cni-870289) Waiting to get IP...
	I1014 15:21:56.505177   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:56.505600   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:21:56.505667   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:56.505598   79322 retry.go:31] will retry after 298.64582ms: waiting for machine to come up
	I1014 15:21:56.806166   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:56.806752   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:21:56.806781   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:56.806699   79322 retry.go:31] will retry after 337.272785ms: waiting for machine to come up
	I1014 15:21:57.146217   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:57.146663   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:21:57.146688   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:57.146631   79322 retry.go:31] will retry after 416.124205ms: waiting for machine to come up
	I1014 15:21:57.564088   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:57.564560   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:21:57.564588   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:57.564530   79322 retry.go:31] will retry after 585.042248ms: waiting for machine to come up
	I1014 15:21:58.151313   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:58.151741   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:21:58.151778   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:58.151696   79322 retry.go:31] will retry after 561.443458ms: waiting for machine to come up
	I1014 15:21:58.714454   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:58.714910   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:21:58.714997   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:58.714897   79322 retry.go:31] will retry after 806.983793ms: waiting for machine to come up
	I1014 15:21:59.523784   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:21:59.524227   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:21:59.524254   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:21:59.524184   79322 retry.go:31] will retry after 768.858179ms: waiting for machine to come up
	I1014 15:22:00.294806   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:00.295247   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:00.295284   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:00.295224   79322 retry.go:31] will retry after 1.259649906s: waiting for machine to come up
	I1014 15:22:01.556977   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:01.557422   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:01.557455   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:01.557355   79322 retry.go:31] will retry after 1.235935239s: waiting for machine to come up
	I1014 15:22:02.794882   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:02.795272   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:02.795287   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:02.795229   79322 retry.go:31] will retry after 2.091314192s: waiting for machine to come up
	I1014 15:22:04.887952   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:04.888475   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:04.888784   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:04.888482   79322 retry.go:31] will retry after 2.188896249s: waiting for machine to come up
	I1014 15:22:07.079806   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:07.080307   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:07.080333   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:07.080266   79322 retry.go:31] will retry after 2.729163532s: waiting for machine to come up
	I1014 15:22:09.811246   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:09.811595   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:09.811620   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:09.811560   79322 retry.go:31] will retry after 3.254830986s: waiting for machine to come up
	I1014 15:22:13.069981   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:13.070394   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find current IP address of domain newest-cni-870289 in network mk-newest-cni-870289
	I1014 15:22:13.070419   79298 main.go:141] libmachine: (newest-cni-870289) DBG | I1014 15:22:13.070350   79322 retry.go:31] will retry after 4.183025531s: waiting for machine to come up
	I1014 15:22:17.257126   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.257676   79298 main.go:141] libmachine: (newest-cni-870289) Found IP for machine: 192.168.72.98
	I1014 15:22:17.257695   79298 main.go:141] libmachine: (newest-cni-870289) Reserving static IP address...
	I1014 15:22:17.257704   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has current primary IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.258032   79298 main.go:141] libmachine: (newest-cni-870289) DBG | unable to find host DHCP lease matching {name: "newest-cni-870289", mac: "52:54:00:7d:a1:9e", ip: "192.168.72.98"} in network mk-newest-cni-870289
	I1014 15:22:17.335672   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Getting to WaitForSSH function...
	I1014 15:22:17.335704   79298 main.go:141] libmachine: (newest-cni-870289) Reserved static IP address: 192.168.72.98
	I1014 15:22:17.335748   79298 main.go:141] libmachine: (newest-cni-870289) Waiting for SSH to be available...
	I1014 15:22:17.338329   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.338722   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:minikube Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:17.338754   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.338847   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Using SSH client type: external
	I1014 15:22:17.338872   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa (-rw-------)
	I1014 15:22:17.338921   79298 main.go:141] libmachine: (newest-cni-870289) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.98 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:22:17.338938   79298 main.go:141] libmachine: (newest-cni-870289) DBG | About to run SSH command:
	I1014 15:22:17.338959   79298 main.go:141] libmachine: (newest-cni-870289) DBG | exit 0
	I1014 15:22:17.470768   79298 main.go:141] libmachine: (newest-cni-870289) DBG | SSH cmd err, output: <nil>: 
	I1014 15:22:17.470991   79298 main.go:141] libmachine: (newest-cni-870289) KVM machine creation complete!
	I1014 15:22:17.471312   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetConfigRaw
	I1014 15:22:17.471818   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:17.471979   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:17.472150   79298 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1014 15:22:17.472166   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetState
	I1014 15:22:17.473718   79298 main.go:141] libmachine: Detecting operating system of created instance...
	I1014 15:22:17.473747   79298 main.go:141] libmachine: Waiting for SSH to be available...
	I1014 15:22:17.473757   79298 main.go:141] libmachine: Getting to WaitForSSH function...
	I1014 15:22:17.473767   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:17.476082   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.476428   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:17.476447   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.476603   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:17.476779   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.476928   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.477070   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:17.477233   79298 main.go:141] libmachine: Using SSH client type: native
	I1014 15:22:17.477436   79298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:22:17.477448   79298 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1014 15:22:17.589910   79298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:22:17.589931   79298 main.go:141] libmachine: Detecting the provisioner...
	I1014 15:22:17.589939   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:17.592791   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.593166   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:17.593190   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.593333   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:17.593501   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.593654   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.593763   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:17.593959   79298 main.go:141] libmachine: Using SSH client type: native
	I1014 15:22:17.594125   79298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:22:17.594135   79298 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1014 15:22:17.711510   79298 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1014 15:22:17.711579   79298 main.go:141] libmachine: found compatible host: buildroot
	I1014 15:22:17.711589   79298 main.go:141] libmachine: Provisioning with buildroot...
	I1014 15:22:17.711596   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:22:17.711870   79298 buildroot.go:166] provisioning hostname "newest-cni-870289"
	I1014 15:22:17.711899   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:22:17.712130   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:17.714982   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.715330   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:17.715352   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.715561   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:17.715730   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.715868   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.716034   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:17.716200   79298 main.go:141] libmachine: Using SSH client type: native
	I1014 15:22:17.716378   79298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:22:17.716394   79298 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-870289 && echo "newest-cni-870289" | sudo tee /etc/hostname
	I1014 15:22:17.849653   79298 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-870289
	
	I1014 15:22:17.849690   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:17.852292   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.852589   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:17.852632   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.852794   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:17.852959   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.853095   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:17.853251   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:17.853430   79298 main.go:141] libmachine: Using SSH client type: native
	I1014 15:22:17.853631   79298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:22:17.853648   79298 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-870289' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-870289/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-870289' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:22:17.975842   79298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:22:17.975878   79298 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:22:17.975920   79298 buildroot.go:174] setting up certificates
	I1014 15:22:17.975933   79298 provision.go:84] configureAuth start
	I1014 15:22:17.975945   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetMachineName
	I1014 15:22:17.976222   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:22:17.978832   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.979095   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:17.979120   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.979283   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:17.981207   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.981472   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:17.981499   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:17.981630   79298 provision.go:143] copyHostCerts
	I1014 15:22:17.981688   79298 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:22:17.981711   79298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:22:17.981801   79298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:22:17.981930   79298 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:22:17.981943   79298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:22:17.981986   79298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:22:17.982070   79298 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:22:17.982080   79298 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:22:17.982114   79298 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:22:17.982188   79298 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.newest-cni-870289 san=[127.0.0.1 192.168.72.98 localhost minikube newest-cni-870289]
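
The server certificate generated above is a CA-signed cert whose SAN list matches the `san=[...]` set in the log line. A rough sketch of issuing such a certificate with Go's crypto/x509 (an assumption-laden illustration, not minikube's implementation; it presumes local ca.pem/ca-key.pem files and an RSA PKCS#1 CA key):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumption: ca.pem / ca-key.pem sit next to the binary; minikube's layout differs.
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM material")
	}
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes an RSA PKCS#1 key
	check(err)

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-870289"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.98")},
		DNSNames:    []string{"localhost", "minikube", "newest-cni-870289"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	check(err)
	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}
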
	I1014 15:22:18.053421   79298 provision.go:177] copyRemoteCerts
	I1014 15:22:18.053472   79298 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:22:18.053494   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:18.056200   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.056560   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.056582   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.056756   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:18.056931   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.057095   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:18.057261   79298 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:22:18.145291   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:22:18.171105   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:22:18.196957   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:22:18.223428   79298 provision.go:87] duration metric: took 247.477023ms to configureAuth
	I1014 15:22:18.223454   79298 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:22:18.223624   79298 config.go:182] Loaded profile config "newest-cni-870289": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:22:18.223697   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:18.226376   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.226692   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.226738   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.226879   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:18.227046   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.227179   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.227303   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:18.227469   79298 main.go:141] libmachine: Using SSH client type: native
	I1014 15:22:18.227691   79298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:22:18.227715   79298 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:22:18.460640   79298 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:22:18.460666   79298 main.go:141] libmachine: Checking connection to Docker...
	I1014 15:22:18.460678   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetURL
	I1014 15:22:18.462040   79298 main.go:141] libmachine: (newest-cni-870289) DBG | Using libvirt version 6000000
	I1014 15:22:18.464635   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.464940   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.464972   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.465099   79298 main.go:141] libmachine: Docker is up and running!
	I1014 15:22:18.465116   79298 main.go:141] libmachine: Reticulating splines...
	I1014 15:22:18.465141   79298 client.go:171] duration metric: took 23.756170915s to LocalClient.Create
	I1014 15:22:18.465168   79298 start.go:167] duration metric: took 23.756260354s to libmachine.API.Create "newest-cni-870289"
	I1014 15:22:18.465180   79298 start.go:293] postStartSetup for "newest-cni-870289" (driver="kvm2")
	I1014 15:22:18.465195   79298 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:22:18.465215   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:18.465454   79298 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:22:18.465477   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:18.467459   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.467728   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.467769   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.467891   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:18.468053   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.468187   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:18.468300   79298 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:22:18.558972   79298 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:22:18.563568   79298 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:22:18.563598   79298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:22:18.563670   79298 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:22:18.563755   79298 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:22:18.563846   79298 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:22:18.574216   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:22:18.599743   79298 start.go:296] duration metric: took 134.545896ms for postStartSetup
	I1014 15:22:18.599803   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetConfigRaw
	I1014 15:22:18.600633   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:22:18.603513   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.603840   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.603886   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.604142   79298 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/config.json ...
	I1014 15:22:18.604361   79298 start.go:128] duration metric: took 23.9146455s to createHost
	I1014 15:22:18.604393   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:18.607503   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.607849   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.607880   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.608000   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:18.608187   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.608355   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.608516   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:18.608683   79298 main.go:141] libmachine: Using SSH client type: native
	I1014 15:22:18.608876   79298 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.98 22 <nil> <nil>}
	I1014 15:22:18.608888   79298 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:22:18.723703   79298 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728919338.690079737
	
	I1014 15:22:18.723730   79298 fix.go:216] guest clock: 1728919338.690079737
	I1014 15:22:18.723740   79298 fix.go:229] Guest: 2024-10-14 15:22:18.690079737 +0000 UTC Remote: 2024-10-14 15:22:18.604380442 +0000 UTC m=+24.028641667 (delta=85.699295ms)
	I1014 15:22:18.723765   79298 fix.go:200] guest clock delta is within tolerance: 85.699295ms
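
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and accept the host when the delta is small. A hedged sketch of that comparison (the parsing helper and the 2-second tolerance are illustrative assumptions; the real tolerance is not shown in this excerpt):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

// parseGuestClock turns the `date +%s.%N` string into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	secs, err := strconv.ParseFloat(s, 64)
	if err != nil {
		return time.Time{}, err
	}
	whole, frac := math.Modf(secs)
	return time.Unix(int64(whole), int64(frac*1e9)), nil
}

func main() {
	guest, err := parseGuestClock("1728919338.690079737") // value taken from the log
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if d := delta.Abs(); d < 2*time.Second { // tolerance is an assumption
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}
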
	I1014 15:22:18.723771   79298 start.go:83] releasing machines lock for "newest-cni-870289", held for 24.034124394s
	I1014 15:22:18.723793   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:18.724039   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:22:18.726935   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.727405   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.727444   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.727676   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:18.728174   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:18.728357   79298 main.go:141] libmachine: (newest-cni-870289) Calling .DriverName
	I1014 15:22:18.728457   79298 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:22:18.728532   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:18.728571   79298 ssh_runner.go:195] Run: cat /version.json
	I1014 15:22:18.728593   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHHostname
	I1014 15:22:18.731508   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.731857   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.731884   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.731902   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.732049   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:18.732216   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.732355   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:18.732359   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:18.732437   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:18.732539   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHPort
	I1014 15:22:18.732559   79298 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:22:18.732693   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHKeyPath
	I1014 15:22:18.732841   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetSSHUsername
	I1014 15:22:18.733003   79298 sshutil.go:53] new ssh client: &{IP:192.168.72.98 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/newest-cni-870289/id_rsa Username:docker}
	I1014 15:22:18.848500   79298 ssh_runner.go:195] Run: systemctl --version
	I1014 15:22:18.854755   79298 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:22:19.018462   79298 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:22:19.024604   79298 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:22:19.024672   79298 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:22:19.042255   79298 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
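
The find/mv step above sidelines any bridge or podman CNI configs by renaming them with a ".mk_disabled" suffix. The same idea expressed directly in Go (a sketch; the directory and patterns are taken from the logged command):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already sidelined
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				panic(err)
			}
			fmt.Printf("disabled %s\n", m)
		}
	}
}
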
	I1014 15:22:19.042285   79298 start.go:495] detecting cgroup driver to use...
	I1014 15:22:19.042363   79298 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:22:19.061370   79298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:22:19.076556   79298 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:22:19.076623   79298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:22:19.091170   79298 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:22:19.106886   79298 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:22:19.228706   79298 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:22:19.381111   79298 docker.go:233] disabling docker service ...
	I1014 15:22:19.381186   79298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:22:19.395198   79298 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:22:19.409002   79298 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:22:19.558021   79298 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:22:19.698434   79298 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:22:19.713831   79298 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:22:19.739445   79298 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:22:19.739507   79298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:22:19.752132   79298 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:22:19.752192   79298 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:22:19.764267   79298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:22:19.776328   79298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:22:19.786910   79298 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:22:19.799234   79298 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:22:19.810149   79298 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:22:19.829270   79298 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
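
The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place on the guest. As a sketch, two of those rewrites (pause_image and cgroup_manager) expressed as regexp replacements in Go rather than sed (the path and values come from the log; this is illustrative, not minikube's code, which runs the edits over SSH):

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)
	// Force the pause image and cgroup driver, mirroring the sed expressions above.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}
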
	I1014 15:22:19.840058   79298 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:22:19.849617   79298 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:22:19.849662   79298 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:22:19.863635   79298 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:22:19.874657   79298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:22:20.011259   79298 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:22:20.107140   79298 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:22:20.107240   79298 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:22:20.112202   79298 start.go:563] Will wait 60s for crictl version
	I1014 15:22:20.112254   79298 ssh_runner.go:195] Run: which crictl
	I1014 15:22:20.116060   79298 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:22:20.153519   79298 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:22:20.153608   79298 ssh_runner.go:195] Run: crio --version
	I1014 15:22:20.183274   79298 ssh_runner.go:195] Run: crio --version
	I1014 15:22:20.213872   79298 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:22:20.215164   79298 main.go:141] libmachine: (newest-cni-870289) Calling .GetIP
	I1014 15:22:20.217820   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:20.218381   79298 main.go:141] libmachine: (newest-cni-870289) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:a1:9e", ip: ""} in network mk-newest-cni-870289: {Iface:virbr4 ExpiryTime:2024-10-14 16:22:09 +0000 UTC Type:0 Mac:52:54:00:7d:a1:9e Iaid: IPaddr:192.168.72.98 Prefix:24 Hostname:newest-cni-870289 Clientid:01:52:54:00:7d:a1:9e}
	I1014 15:22:20.218421   79298 main.go:141] libmachine: (newest-cni-870289) DBG | domain newest-cni-870289 has defined IP address 192.168.72.98 and MAC address 52:54:00:7d:a1:9e in network mk-newest-cni-870289
	I1014 15:22:20.218508   79298 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:22:20.223613   79298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
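
The grep/echo pipeline above is an idempotent rewrite of /etc/hosts: drop any existing host.minikube.internal entry, then append the current mapping. A small Go equivalent, for illustration only (the IP is the one logged above):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}
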
	I1014 15:22:20.238947   79298 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1014 15:22:20.240219   79298 kubeadm.go:883] updating cluster {Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVers
ion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:22:20.240328   79298 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:22:20.240382   79298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:22:20.275166   79298 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:22:20.275253   79298 ssh_runner.go:195] Run: which lz4
	I1014 15:22:20.279532   79298 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:22:20.283750   79298 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:22:20.283784   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:22:21.696015   79298 crio.go:462] duration metric: took 1.416510663s to copy over tarball
	I1014 15:22:21.696090   79298 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:22:23.757167   79298 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.061053035s)
	I1014 15:22:23.757204   79298 crio.go:469] duration metric: took 2.061159955s to extract the tarball
	I1014 15:22:23.757212   79298 ssh_runner.go:146] rm: /preloaded.tar.lz4
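
Unpacking the preload is just a tar invocation with an lz4 decompressor, as logged above. A minimal sketch that shells out the same way (paths match the log; this is not minikube's ssh_runner, which executes the command on the guest over SSH):

package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}
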
	I1014 15:22:23.795256   79298 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:22:23.843160   79298 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:22:23.843184   79298 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:22:23.843191   79298 kubeadm.go:934] updating node { 192.168.72.98 8443 v1.31.1 crio true true} ...
	I1014 15:22:23.843283   79298 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-870289 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.98
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:22:23.843392   79298 ssh_runner.go:195] Run: crio config
	I1014 15:22:23.892847   79298 cni.go:84] Creating CNI manager for ""
	I1014 15:22:23.892871   79298 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:22:23.892882   79298 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1014 15:22:23.892902   79298 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.98 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-870289 NodeName:newest-cni-870289 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.98"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[
] NodeIP:192.168.72.98 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:22:23.893568   79298 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.98
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-870289"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.98"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.98"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "feature-gates"
	      value: "ServerSideApply=true"
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:22:23.893665   79298 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:22:23.904926   79298 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:22:23.905010   79298 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:22:23.915440   79298 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (353 bytes)
	I1014 15:22:23.933671   79298 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:22:23.952578   79298 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2484 bytes)
	I1014 15:22:23.971771   79298 ssh_runner.go:195] Run: grep 192.168.72.98	control-plane.minikube.internal$ /etc/hosts
	I1014 15:22:23.975986   79298 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.98	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:22:23.989575   79298 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:22:24.111604   79298 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:22:24.129496   79298 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289 for IP: 192.168.72.98
	I1014 15:22:24.129521   79298 certs.go:194] generating shared ca certs ...
	I1014 15:22:24.129541   79298 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:22:24.129730   79298 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:22:24.129787   79298 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:22:24.129800   79298 certs.go:256] generating profile certs ...
	I1014 15:22:24.129875   79298 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/client.key
	I1014 15:22:24.129894   79298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/client.crt with IP's: []
	I1014 15:22:24.337292   79298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/client.crt ...
	I1014 15:22:24.337319   79298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/client.crt: {Name:mkc8b0215a34a428369fd3f90a355734b3341068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:22:24.337484   79298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/client.key ...
	I1014 15:22:24.337495   79298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/client.key: {Name:mk8a6ef1e143d976b5b1d52899e6a2c97484a473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:22:24.337569   79298 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key.5e9d2aba
	I1014 15:22:24.337587   79298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.crt.5e9d2aba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.98]
	I1014 15:22:24.499154   79298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.crt.5e9d2aba ...
	I1014 15:22:24.499180   79298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.crt.5e9d2aba: {Name:mk04ddc6eb6a73d345b5309a60243e4231b9e645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:22:24.499345   79298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key.5e9d2aba ...
	I1014 15:22:24.499358   79298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key.5e9d2aba: {Name:mk570ccdd996d4c4d0b15a9c9ede37a4545e28e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:22:24.499431   79298 certs.go:381] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.crt.5e9d2aba -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.crt
	I1014 15:22:24.499519   79298 certs.go:385] copying /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key.5e9d2aba -> /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key
	I1014 15:22:24.499582   79298 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.key
	I1014 15:22:24.499600   79298 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.crt with IP's: []
	I1014 15:22:24.728323   79298 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.crt ...
	I1014 15:22:24.728371   79298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.crt: {Name:mkda635afaa018a7dc3338987f85aa67cd41ba8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:22:24.728543   79298 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.key ...
	I1014 15:22:24.728559   79298 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.key: {Name:mk5be94e5a60b87e526458a872d120766207895a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:22:24.728726   79298 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:22:24.728762   79298 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:22:24.728772   79298 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:22:24.728804   79298 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:22:24.728834   79298 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:22:24.728857   79298 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:22:24.728896   79298 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:22:24.729472   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:22:24.757141   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:22:24.785502   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:22:24.811204   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:22:24.838914   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:22:24.866935   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:22:24.893165   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:22:24.921563   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/newest-cni-870289/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:22:24.960304   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:22:24.988052   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:22:25.014152   79298 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:22:25.038968   79298 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:22:25.059693   79298 ssh_runner.go:195] Run: openssl version
	I1014 15:22:25.066378   79298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:22:25.079317   79298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:22:25.084374   79298 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:22:25.084442   79298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:22:25.090647   79298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:22:25.104170   79298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:22:25.116781   79298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:22:25.121956   79298 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:22:25.122015   79298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:22:25.129010   79298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:22:25.141326   79298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:22:25.153040   79298 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:22:25.158261   79298 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:22:25.158317   79298 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:22:25.164484   79298 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
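
Each `test -L ... || ln -fs ...` above creates an OpenSSL hash-named symlink only if it is missing. A sketch of that idempotent link for the last certificate (the hash-named link target comes from the preceding `openssl x509 -hash` output and is not recomputed here):

package main

import (
	"os"
)

func main() {
	const link = "/etc/ssl/certs/3ec20f2e.0"
	const target = "/etc/ssl/certs/150232.pem"
	if _, err := os.Lstat(link); err == nil {
		return // a link (or file) by that name already exists, mirroring `test -L`
	} else if !os.IsNotExist(err) {
		panic(err)
	}
	if err := os.Symlink(target, link); err != nil {
		panic(err)
	}
}
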
	I1014 15:22:25.176793   79298 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:22:25.182472   79298 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 15:22:25.182540   79298 kubeadm.go:392] StartCluster: {Name:newest-cni-870289 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-870289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.98 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:22:25.182650   79298 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:22:25.182706   79298 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:22:25.226448   79298 cri.go:89] found id: ""
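
	(Annotation) The 'found id: ""' line means the label-filtered crictl listing returned nothing, i.e. no kube-system containers exist yet on this node. A rough local equivalent of that check, assuming crictl is on PATH (minikube issues the same command over SSH):

	// cri_check.go: sketch of the "any kube-system containers?" probe.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println(`found id: ""`) // matches the log: nothing is running yet
		} else {
			fmt.Println("existing kube-system containers:", ids)
		}
	}
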
	I1014 15:22:25.226526   79298 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:22:25.237799   79298 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:22:25.250404   79298 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:22:25.263164   79298 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:22:25.263190   79298 kubeadm.go:157] found existing configuration files:
	
	I1014 15:22:25.263240   79298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:22:25.273313   79298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:22:25.273370   79298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:22:25.284489   79298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:22:25.295903   79298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:22:25.295976   79298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:22:25.308012   79298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:22:25.318095   79298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:22:25.318167   79298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:22:25.328841   79298 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:22:25.338300   79298 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:22:25.338369   79298 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
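
	(Annotation) The grep/rm sequence above keeps each kubeconfig only if it already references https://control-plane.minikube.internal:8443; here all four files are absent, so each grep exits with status 2 and the file is removed (a no-op) before kubeadm regenerates it. A small Go sketch of that cleanup logic, as a local approximation of what minikube runs over SSH:

	// stale_config.go: sketch of the stale-kubeconfig cleanup before "kubeadm init".
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	const endpoint = "https://control-plane.minikube.internal:8443"

	func main() {
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Mirrors "grep ... || rm -f": a missing file or one without the
				// expected endpoint is treated as stale and removed.
				_ = os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
			}
		}
	}
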
	I1014 15:22:25.348348   79298 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:22:25.468008   79298 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:22:25.468091   79298 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:22:25.576822   79298 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:22:25.576972   79298 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:22:25.577110   79298 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:22:25.589892   79298 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:22:25.646655   79298 out.go:235]   - Generating certificates and keys ...
	I1014 15:22:25.646765   79298 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:22:25.646853   79298 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:22:25.788835   79298 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 15:22:25.861953   79298 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 15:22:26.393579   79298 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 15:22:26.499662   79298 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 15:22:26.610460   79298 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 15:22:26.610632   79298 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-870289] and IPs [192.168.72.98 127.0.0.1 ::1]
	I1014 15:22:26.742697   79298 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 15:22:26.742859   79298 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-870289] and IPs [192.168.72.98 127.0.0.1 ::1]
	I1014 15:22:26.852298   79298 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 15:22:27.069416   79298 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 15:22:27.281986   79298 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 15:22:27.282238   79298 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:22:27.360818   79298 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:22:27.584059   79298 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:22:27.680611   79298 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:22:27.803673   79298 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:22:28.029979   79298 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:22:28.030675   79298 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:22:28.036294   79298 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:22:28.079023   79298 out.go:235]   - Booting up control plane ...
	I1014 15:22:28.079146   79298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:22:28.079238   79298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:22:28.079295   79298 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:22:28.079442   79298 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:22:28.079538   79298 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:22:28.079574   79298 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:22:28.202256   79298 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:22:28.202417   79298 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:22:29.204118   79298 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002421804s
	I1014 15:22:29.204245   79298 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
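
	(Annotation) The kubelet-check and api-check phases above simply poll health endpoints until they answer 200 or the 4m0s budget runs out; here the kubelet became healthy after about one second. A minimal Go sketch of the kubelet probe against http://127.0.0.1:10248/healthz (URL and timeout are from the kubeadm output; the one-second polling interval is an assumption):

	// kubelet_health.go: sketch of the "[kubelet-check]" wait.
	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func waitForKubelet(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("kubelet not healthy after %s", timeout)
	}

	func main() {
		if err := waitForKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("kubelet healthy")
	}
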
	
	
	==> CRI-O <==
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.158047247Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919352158020989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=909de97a-bc81-481b-88bd-b51861691c75 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.158503259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f9b3478-bf59-4df3-b7cf-27eeef90af4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.158582975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f9b3478-bf59-4df3-b7cf-27eeef90af4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.158901476Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f9b3478-bf59-4df3-b7cf-27eeef90af4e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.202274711Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=365bd138-9ba4-4f2e-b410-eda0611969dc name=/runtime.v1.RuntimeService/Version
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.202404840Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=365bd138-9ba4-4f2e-b410-eda0611969dc name=/runtime.v1.RuntimeService/Version
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.203973852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a816bc29-3913-4e5e-9235-933bf77eb081 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.204450792Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919352204423332,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a816bc29-3913-4e5e-9235-933bf77eb081 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.205296879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=506d5dfa-7d8a-4689-a867-3584d745bdf2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.205386967Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=506d5dfa-7d8a-4689-a867-3584d745bdf2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.207385799Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=506d5dfa-7d8a-4689-a867-3584d745bdf2 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.253796200Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=998e3c85-2f50-44ca-92b8-581f6a2850d7 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.253901655Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=998e3c85-2f50-44ca-92b8-581f6a2850d7 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.255916100Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cbe7be51-99d5-4630-a89c-075576e5cff6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.256611087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919352256575466,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cbe7be51-99d5-4630-a89c-075576e5cff6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.257735238Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9f18394-430e-4dcd-b847-088316650f36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.257855043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9f18394-430e-4dcd-b847-088316650f36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.258272146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9f18394-430e-4dcd-b847-088316650f36 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.303881518Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97e58bbf-487a-4a5d-8487-751d0868c0cf name=/runtime.v1.RuntimeService/Version
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.303972583Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97e58bbf-487a-4a5d-8487-751d0868c0cf name=/runtime.v1.RuntimeService/Version
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.306109994Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=17c37578-1075-4a77-866d-835bf85f9c25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.307011058Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919352306889211,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=17c37578-1075-4a77-866d-835bf85f9c25 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.308043972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7a472013-4f7b-4608-90f3-e4c2c888b6b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.308117198Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7a472013-4f7b-4608-90f3-e4c2c888b6b1 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:22:32 no-preload-813300 crio[711]: time="2024-10-14 15:22:32.308406937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675,PodSandboxId:f65d8d057416b9747163461409fb02e63baaccda87f3ed4616b7db10021cb917,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1728918474205353910,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d79bfdf-bda5-42bf-8ddf-73d7df4855db,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d,PodSandboxId:a2836801e53a0d69e53404fa44e5147c5184d5687c2b8897aac5baefa29d07c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473444794126,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-nvpvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d926987d-9c61-4bf6-83e3-97334715e1d5,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad,PodSandboxId:6a29826877e8796338270b89184f647f0265e8cf65ed745db5eef2d9500d98a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1728918473219503640,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-fjzn8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 78
50936e-8104-4e8f-a4cc-948579963790,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6,PodSandboxId:2da3b6fbd747bb62556d4178f5039822638d219381eb8488d84370503154cc03,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561,State:CONTAINER_RUNNING,CreatedAt:
1728918472871037421,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-54rrd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0c8ab0de-c204-46f5-a725-5dcd9eff59d8,},Annotations:map[string]string{io.kubernetes.container.hash: 159dcc59,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c,PodSandboxId:d3a7cdecaacf24ac8239e552cacac1cfb68a89a56a497a73033d798ed1c5a708,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1,State:CONTAINER_RUNNING,CreatedAt:1728918461828949591,
Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 808894b816cffed524db94d6e34a052d,},Annotations:map[string]string{io.kubernetes.container.hash: d1900d79,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858,PodSandboxId:6ee6ad98eab10d0d44de29ae2a3704f4727b02d09dcfc02a80999ad6df9a778a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_RUNNING,CreatedAt:172891846182623
6832,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031,PodSandboxId:de8829f46ea7a44e25078557862449681d75c537522a6df279f3687225512725,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b,State:CONTAINER_RUNNING,CreatedAt:1728918461760964338,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fdbaadf2aa4ad3fc6f15ade30860d76d,},Annotations:map[string]string{io.kubernetes.container.hash: 12faacf7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867,PodSandboxId:efda81733a2d239762e184a01912729b69a91016df06b7ba3933aa88de48c782,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1728918461771646799,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16ad1f7c7ca791817a445f2eb5192551,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b,PodSandboxId:3eafb95cf605fed076cb225caa76fd763cdc9bb555510b96b096e6bd270b52f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee,State:CONTAINER_EXITED,CreatedAt:1728918174775447701,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-813300,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a005d01945afa403b756193f11f3824f,},Annotations:map[string]string{io.kubernetes.container.hash: 7df2713b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7a472013-4f7b-4608-90f3-e4c2c888b6b1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2fe5212fe3ebb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   f65d8d057416b       storage-provisioner
	03f2753934798       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   a2836801e53a0       coredns-7c65d6cfc9-nvpvl
	739b2529f0fdf       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   6a29826877e87       coredns-7c65d6cfc9-fjzn8
	842c65533db30       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561   14 minutes ago      Running             kube-proxy                0                   2da3b6fbd747b       kube-proxy-54rrd
	6762e30b49a92       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1   14 minutes ago      Running             kube-controller-manager   2                   d3a7cdecaacf2       kube-controller-manager-no-preload-813300
	5ac34ee741ac4       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   14 minutes ago      Running             kube-apiserver            2                   6ee6ad98eab10       kube-apiserver-no-preload-813300
	870d6b62c80ba       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   14 minutes ago      Running             etcd                      2                   efda81733a2d2       etcd-no-preload-813300
	af736f6784dc6       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b   14 minutes ago      Running             kube-scheduler            2                   de8829f46ea7a       kube-scheduler-no-preload-813300
	62baf067c7938       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee   19 minutes ago      Exited              kube-apiserver            1                   3eafb95cf605f       kube-apiserver-no-preload-813300
	
	
	==> coredns [03f2753934798cf6791abe08ae9795458185ad9eac0059ead3d1cb94cd908b3d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [739b2529f0fdf4a73a5502b2fc856d948eccb8dbbae56a6b9d08c0413c0279ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-813300
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-813300
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=no-preload-813300
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 15:07:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-813300
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 15:22:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 15:18:11 +0000   Mon, 14 Oct 2024 15:07:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 15:18:11 +0000   Mon, 14 Oct 2024 15:07:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 15:18:11 +0000   Mon, 14 Oct 2024 15:07:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 15:18:11 +0000   Mon, 14 Oct 2024 15:07:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.13
	  Hostname:    no-preload-813300
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 06200cbdb49d457f814d09539b06f86f
	  System UUID:                06200cbd-b49d-457f-814d-09539b06f86f
	  Boot ID:                    45284b4f-e486-4be9-914a-4c32f145bb44
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-fjzn8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-nvpvl                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-813300                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-813300             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-813300    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-54rrd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-813300             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-8vfll              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node no-preload-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node no-preload-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node no-preload-813300 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                kubelet          Node no-preload-813300 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                kubelet          Node no-preload-813300 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                kubelet          Node no-preload-813300 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-813300 event: Registered Node no-preload-813300 in Controller
	
	
	==> dmesg <==
	[  +0.056720] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.041792] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.330181] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.720417] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.595444] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.623929] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.063621] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.053460] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.170675] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.149821] systemd-fstab-generator[673]: Ignoring "noauto" option for root device
	[  +0.280489] systemd-fstab-generator[703]: Ignoring "noauto" option for root device
	[ +15.664271] systemd-fstab-generator[1236]: Ignoring "noauto" option for root device
	[  +0.063930] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.831964] systemd-fstab-generator[1357]: Ignoring "noauto" option for root device
	[  +5.638392] kauditd_printk_skb: 100 callbacks suppressed
	[Oct14 15:03] kauditd_printk_skb: 87 callbacks suppressed
	[Oct14 15:07] kauditd_printk_skb: 2 callbacks suppressed
	[  +1.180041] systemd-fstab-generator[3040]: Ignoring "noauto" option for root device
	[  +4.389658] kauditd_printk_skb: 58 callbacks suppressed
	[  +1.658491] systemd-fstab-generator[3363]: Ignoring "noauto" option for root device
	[  +5.376177] systemd-fstab-generator[3475]: Ignoring "noauto" option for root device
	[  +0.124744] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.376643] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [870d6b62c80ba13f1a28f47e62cae635ae185e0169f6fd474843642b7fd1b867] <==
	{"level":"info","ts":"2024-10-14T15:07:42.330796Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-14T15:07:42.330821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 received MsgPreVoteResp from b42979a4111f16a1 at term 1"}
	{"level":"info","ts":"2024-10-14T15:07:42.330832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 became candidate at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.330840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 received MsgVoteResp from b42979a4111f16a1 at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.330848Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b42979a4111f16a1 became leader at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.330854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b42979a4111f16a1 elected leader b42979a4111f16a1 at term 2"}
	{"level":"info","ts":"2024-10-14T15:07:42.334866Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.339007Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"b42979a4111f16a1","local-member-attributes":"{Name:no-preload-813300 ClientURLs:[https://192.168.61.13:2379]}","request-path":"/0/members/b42979a4111f16a1/attributes","cluster-id":"bb1e88613a134efc","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T15:07:42.341738Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:07:42.342143Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T15:07:42.344746Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T15:07:42.344810Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T15:07:42.345554Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:07:42.348964Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.13:2379"}
	{"level":"info","ts":"2024-10-14T15:07:42.346738Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bb1e88613a134efc","local-member-id":"b42979a4111f16a1","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.347293Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T15:07:42.357801Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.357875Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T15:07:42.359116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-14T15:17:42.833999Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":718}
	{"level":"info","ts":"2024-10-14T15:17:42.845739Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":718,"took":"10.717364ms","hash":2204497050,"current-db-size-bytes":2359296,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2359296,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-10-14T15:17:42.845843Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2204497050,"revision":718,"compact-revision":-1}
	{"level":"warn","ts":"2024-10-14T15:22:26.536782Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.879222ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-14T15:22:26.536994Z","caller":"traceutil/trace.go:171","msg":"trace[118250248] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1190; }","duration":"102.140725ms","start":"2024-10-14T15:22:26.434829Z","end":"2024-10-14T15:22:26.536970Z","steps":["trace[118250248] 'range keys from in-memory index tree'  (duration: 101.815264ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-14T15:22:26.537768Z","caller":"traceutil/trace.go:171","msg":"trace[1552761776] transaction","detail":"{read_only:false; response_revision:1191; number_of_response:1; }","duration":"118.798629ms","start":"2024-10-14T15:22:26.418851Z","end":"2024-10-14T15:22:26.537650Z","steps":["trace[1552761776] 'process raft request'  (duration: 57.385406ms)","trace[1552761776] 'compare'  (duration: 60.657925ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:22:32 up 20 min,  0 users,  load average: 0.14, 0.11, 0.09
	Linux no-preload-813300 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [5ac34ee741ac4e85fbce7777baead20a19c84896009f4671d0c3a9aa96182858] <==
	W1014 15:17:45.485077       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:17:45.485215       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:17:45.486076       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:17:45.487258       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:18:45.486805       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:18:45.486914       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:18:45.487977       1 handler_proxy.go:99] no RequestInfo found in the context
	I1014 15:18:45.487996       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1014 15:18:45.488173       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:18:45.489322       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1014 15:20:45.488978       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:20:45.489121       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1014 15:20:45.490400       1 handler_proxy.go:99] no RequestInfo found in the context
	E1014 15:20:45.490538       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1014 15:20:45.490586       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 15:20:45.491718       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [62baf067c7938f117bd93de058d98f004012ebe1fcf8caae9b96bcc24016757b] <==
	W1014 15:07:34.673062       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.677424       1 logging.go:55] [core] [Channel #94 SubChannel #95]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.777558       1 logging.go:55] [core] [Channel #106 SubChannel #107]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.794211       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:34.940348       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.078072       1 logging.go:55] [core] [Channel #34 SubChannel #35]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.124333       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.164358       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.189067       1 logging.go:55] [core] [Channel #82 SubChannel #83]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.233552       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.254041       1 logging.go:55] [core] [Channel #97 SubChannel #98]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.289063       1 logging.go:55] [core] [Channel #61 SubChannel #62]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.336030       1 logging.go:55] [core] [Channel #22 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.343801       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.408071       1 logging.go:55] [core] [Channel #163 SubChannel #164]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.433895       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.465939       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.473297       1 logging.go:55] [core] [Channel #184 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.490253       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.512340       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.536906       1 logging.go:55] [core] [Channel #118 SubChannel #119]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:35.998853       1 logging.go:55] [core] [Channel #145 SubChannel #146]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:39.162045       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:39.360733       1 logging.go:55] [core] [Channel #121 SubChannel #122]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1014 15:07:39.367426       1 logging.go:55] [core] [Channel #52 SubChannel #53]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [6762e30b49a928c9391f017d8bf782823c7777cf1fab83d160db4ebf055e519c] <==
	E1014 15:17:21.441811       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:17:22.003320       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:17:51.449477       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:17:52.011908       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:18:11.821300       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-813300"
	E1014 15:18:21.456817       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:18:22.022452       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:18:51.464623       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:18:52.031457       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1014 15:19:04.184740       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="359.025µs"
	I1014 15:19:17.179984       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="49.445µs"
	E1014 15:19:21.472578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:19:22.040614       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:19:51.479631       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:19:52.050795       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:20:21.487352       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:20:22.060310       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:20:51.495060       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:20:52.069019       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:21:21.501953       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:21:22.078010       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:21:51.511337       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:21:52.086856       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1014 15:22:21.521163       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1014 15:22:22.097918       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [842c65533db30209091d4a7fd1a556d412dd451dfa31d61fa9b9090e674419a6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1014 15:07:53.714041       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1014 15:07:53.755632       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.61.13"]
	E1014 15:07:53.755916       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 15:07:53.980337       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1014 15:07:53.980400       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1014 15:07:53.980429       1 server_linux.go:169] "Using iptables Proxier"
	I1014 15:07:53.991604       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 15:07:53.992005       1 server.go:483] "Version info" version="v1.31.1"
	I1014 15:07:53.992036       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 15:07:53.993896       1 config.go:199] "Starting service config controller"
	I1014 15:07:53.993942       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 15:07:53.993978       1 config.go:105] "Starting endpoint slice config controller"
	I1014 15:07:53.993982       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 15:07:53.995137       1 config.go:328] "Starting node config controller"
	I1014 15:07:53.995206       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 15:07:54.103772       1 shared_informer.go:320] Caches are synced for node config
	I1014 15:07:54.103853       1 shared_informer.go:320] Caches are synced for service config
	I1014 15:07:54.103894       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [af736f6784dc6fdd444e1b9d9ab0c2c185a42d68085dcbe37a46cfec63664031] <==
	W1014 15:07:44.556786       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:44.556838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:44.556874       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:44.556912       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.478583       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 15:07:45.478869       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.539627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:45.539981       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.545198       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 15:07:45.545371       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.583910       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:45.584055       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.660857       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 15:07:45.661089       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1014 15:07:45.713881       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 15:07:45.713992       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.773580       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 15:07:45.773783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.808430       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 15:07:45.808483       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.835530       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 15:07:45.835590       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 15:07:45.857456       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 15:07:45.857510       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 15:07:48.233732       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 15:21:22 no-preload-813300 kubelet[3370]: E1014 15:21:22.164931    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:21:27 no-preload-813300 kubelet[3370]: E1014 15:21:27.360421    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919287360076112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:21:27 no-preload-813300 kubelet[3370]: E1014 15:21:27.360515    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919287360076112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:21:35 no-preload-813300 kubelet[3370]: E1014 15:21:35.166152    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:21:37 no-preload-813300 kubelet[3370]: E1014 15:21:37.361932    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919297361523428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:21:37 no-preload-813300 kubelet[3370]: E1014 15:21:37.361982    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919297361523428,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:21:47 no-preload-813300 kubelet[3370]: E1014 15:21:47.217809    3370 iptables.go:577] "Could not set up iptables canary" err=<
	Oct 14 15:21:47 no-preload-813300 kubelet[3370]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Oct 14 15:21:47 no-preload-813300 kubelet[3370]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Oct 14 15:21:47 no-preload-813300 kubelet[3370]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Oct 14 15:21:47 no-preload-813300 kubelet[3370]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Oct 14 15:21:47 no-preload-813300 kubelet[3370]: E1014 15:21:47.364751    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919307363950141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:21:47 no-preload-813300 kubelet[3370]: E1014 15:21:47.364854    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919307363950141,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:21:50 no-preload-813300 kubelet[3370]: E1014 15:21:50.164977    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:21:57 no-preload-813300 kubelet[3370]: E1014 15:21:57.366879    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919317366434187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:21:57 no-preload-813300 kubelet[3370]: E1014 15:21:57.366935    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919317366434187,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:01 no-preload-813300 kubelet[3370]: E1014 15:22:01.166287    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:22:07 no-preload-813300 kubelet[3370]: E1014 15:22:07.369511    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919327368854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:07 no-preload-813300 kubelet[3370]: E1014 15:22:07.369568    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919327368854827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:14 no-preload-813300 kubelet[3370]: E1014 15:22:14.165495    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:22:17 no-preload-813300 kubelet[3370]: E1014 15:22:17.371857    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919337371327965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:17 no-preload-813300 kubelet[3370]: E1014 15:22:17.372135    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919337371327965,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:26 no-preload-813300 kubelet[3370]: E1014 15:22:26.164805    3370 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-8vfll" podUID="cf3594da-9896-49ed-b47f-5bbea36c9aaf"
	Oct 14 15:22:27 no-preload-813300 kubelet[3370]: E1014 15:22:27.378856    3370 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919347374869698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 14 15:22:27 no-preload-813300 kubelet[3370]: E1014 15:22:27.379382    3370 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919347374869698,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101000,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [2fe5212fe3ebb4271fe4f9776bdc95ea7bbd4aea70456281189b86f4d9323675] <==
	I1014 15:07:54.415632       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 15:07:54.431382       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 15:07:54.431443       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 15:07:54.456955       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 15:07:54.477136       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"544b6cc6-ec59-4c30-9bdb-e6b0c42eb5fd", APIVersion:"v1", ResourceVersion:"433", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-813300_333f6081-642e-4e06-a2d9-fe0ec4a4ed66 became leader
	I1014 15:07:54.477626       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-813300_333f6081-642e-4e06-a2d9-fe0ec4a4ed66!
	I1014 15:07:54.578836       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-813300_333f6081-642e-4e06-a2d9-fe0ec4a4ed66!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-813300 -n no-preload-813300
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-813300 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-8vfll
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-813300 describe pod metrics-server-6867b74b74-8vfll
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-813300 describe pod metrics-server-6867b74b74-8vfll: exit status 1 (74.055755ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-8vfll" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-813300 describe pod metrics-server-6867b74b74-8vfll: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.78s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (138.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:20:29.460155   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/kindnet-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:21:06.400877   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 15:21:07.152756   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:21:22.162624   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
E1014 15:21:40.075553   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.138:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.138:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (241.061075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-399767" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-399767 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-399767 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.791µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-399767 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
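(Aside: a hedged way to confirm the state the assertions above ran into, using commands already issued by the harness in this log, is to check the apiserver component first and only then retry the dashboard query:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767
	# only meaningful once the line above reports Running
	kubectl --context old-k8s-version-399767 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper
)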
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (222.5711ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-399767 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-399767 logs -n 25: (1.567752019s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-517678 sudo cat                              | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo                                  | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo find                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-517678 sudo crio                             | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-517678                                       | bridge-517678                | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	| delete  | -p                                                     | disable-driver-mounts-887610 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:53 UTC |
	|         | disable-driver-mounts-887610                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:53 UTC | 14 Oct 24 14:55 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-813300             | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-989166            | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC | 14 Oct 24 14:54 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:54 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-201291  | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC | 14 Oct 24 14:55 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:55 UTC |                     |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-813300                  | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-813300                                   | no-preload-813300            | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC | 14 Oct 24 15:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-399767        | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:56 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-989166                 | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-989166                                  | embed-certs-989166           | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-201291       | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-201291 | jenkins | v1.34.0 | 14 Oct 24 14:57 UTC | 14 Oct 24 15:06 UTC |
	|         | default-k8s-diff-port-201291                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-399767             | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC | 14 Oct 24 14:58 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-399767                              | old-k8s-version-399767       | jenkins | v1.34.0 | 14 Oct 24 14:58 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:58:18
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:58:18.000027   72639 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:58:18.000165   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000176   72639 out.go:358] Setting ErrFile to fd 2...
	I1014 14:58:18.000189   72639 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:58:18.000390   72639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:58:18.000911   72639 out.go:352] Setting JSON to false
	I1014 14:58:18.001828   72639 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":6048,"bootTime":1728911850,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:58:18.001919   72639 start.go:139] virtualization: kvm guest
	I1014 14:58:18.004056   72639 out.go:177] * [old-k8s-version-399767] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:58:18.005382   72639 notify.go:220] Checking for updates...
	I1014 14:58:18.005437   72639 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:58:18.006939   72639 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:58:18.008275   72639 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:58:18.009565   72639 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:58:18.010773   72639 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:58:18.011941   72639 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:58:18.013472   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 14:58:18.013833   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.013892   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.028372   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1014 14:58:18.028786   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.029355   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.029375   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.029671   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.029827   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.031644   72639 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:58:18.033229   72639 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:58:18.033524   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:58:18.033565   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:58:18.048210   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34273
	I1014 14:58:18.048620   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:58:18.049080   72639 main.go:141] libmachine: Using API Version  1
	I1014 14:58:18.049102   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:58:18.049377   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:58:18.049550   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 14:58:18.084664   72639 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 14:58:18.085942   72639 start.go:297] selected driver: kvm2
	I1014 14:58:18.085952   72639 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.086042   72639 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:58:18.086707   72639 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.086795   72639 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 14:58:18.101802   72639 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 14:58:18.102194   72639 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:58:18.102224   72639 cni.go:84] Creating CNI manager for ""
	I1014 14:58:18.102263   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 14:58:18.102315   72639 start.go:340] cluster config:
	{Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:58:18.102441   72639 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:58:18.105418   72639 out.go:177] * Starting "old-k8s-version-399767" primary control-plane node in "old-k8s-version-399767" cluster
	I1014 14:58:16.182868   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:18.106656   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 14:58:18.106696   72639 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 14:58:18.106708   72639 cache.go:56] Caching tarball of preloaded images
	I1014 14:58:18.106790   72639 preload.go:172] Found /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1014 14:58:18.106800   72639 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1014 14:58:18.106889   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 14:58:18.107063   72639 start.go:360] acquireMachinesLock for old-k8s-version-399767: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 14:58:22.262902   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:25.334877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:31.414867   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:34.486863   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:40.566883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:43.638929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:49.718856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:52.790946   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:58:58.870883   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:01.942844   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:08.022831   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:11.094893   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:17.174897   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:20.246818   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:26.326911   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:29.398852   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:35.478877   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:38.550829   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:44.630857   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:47.702856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:53.782842   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 14:59:56.854890   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:02.934894   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:06.006879   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:12.086905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:15.158856   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:21.238905   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:24.310889   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:30.390878   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:33.462909   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:39.542866   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:42.614929   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:48.694859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:51.766865   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:00:57.846913   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:00.918859   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:06.998892   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:10.070810   71679 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.61.13:22: connect: no route to host
	I1014 15:01:13.075950   72173 start.go:364] duration metric: took 3m43.687804446s to acquireMachinesLock for "embed-certs-989166"
	I1014 15:01:13.076005   72173 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:13.076011   72173 fix.go:54] fixHost starting: 
	I1014 15:01:13.076341   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:13.076386   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:13.092168   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41771
	I1014 15:01:13.092686   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:13.093180   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:01:13.093204   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:13.093560   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:13.093749   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:13.093889   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:01:13.095639   72173 fix.go:112] recreateIfNeeded on embed-certs-989166: state=Stopped err=<nil>
	I1014 15:01:13.095665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	W1014 15:01:13.095827   72173 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:13.097909   72173 out.go:177] * Restarting existing kvm2 VM for "embed-certs-989166" ...
	I1014 15:01:13.099253   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Start
	I1014 15:01:13.099433   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring networks are active...
	I1014 15:01:13.100328   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network default is active
	I1014 15:01:13.100683   72173 main.go:141] libmachine: (embed-certs-989166) Ensuring network mk-embed-certs-989166 is active
	I1014 15:01:13.101062   72173 main.go:141] libmachine: (embed-certs-989166) Getting domain xml...
	I1014 15:01:13.101867   72173 main.go:141] libmachine: (embed-certs-989166) Creating domain...
	I1014 15:01:13.073323   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:13.073356   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073658   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:01:13.073682   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:01:13.073854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:01:13.075825   71679 machine.go:96] duration metric: took 4m37.425006s to provisionDockerMachine
	I1014 15:01:13.075866   71679 fix.go:56] duration metric: took 4m37.446829923s for fixHost
	I1014 15:01:13.075872   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 4m37.446848059s
	W1014 15:01:13.075889   71679 start.go:714] error starting host: provision: host is not running
	W1014 15:01:13.075983   71679 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1014 15:01:13.075992   71679 start.go:729] Will try again in 5 seconds ...
	I1014 15:01:14.319338   72173 main.go:141] libmachine: (embed-certs-989166) Waiting to get IP...
	I1014 15:01:14.320167   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.320582   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.320651   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.320577   73268 retry.go:31] will retry after 213.073722ms: waiting for machine to come up
	I1014 15:01:14.534913   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.535353   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.535375   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.535306   73268 retry.go:31] will retry after 316.205029ms: waiting for machine to come up
	I1014 15:01:14.852769   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:14.853201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:14.853261   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:14.853201   73268 retry.go:31] will retry after 399.414867ms: waiting for machine to come up
	I1014 15:01:15.253657   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.253955   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.253979   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.253917   73268 retry.go:31] will retry after 537.097034ms: waiting for machine to come up
	I1014 15:01:15.792362   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:15.792736   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:15.792763   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:15.792703   73268 retry.go:31] will retry after 594.582114ms: waiting for machine to come up
	I1014 15:01:16.388419   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:16.388838   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:16.388869   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:16.388793   73268 retry.go:31] will retry after 814.814512ms: waiting for machine to come up
	I1014 15:01:17.204791   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:17.205229   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:17.205255   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:17.205176   73268 retry.go:31] will retry after 971.673961ms: waiting for machine to come up
	I1014 15:01:18.178701   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:18.179100   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:18.179130   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:18.179048   73268 retry.go:31] will retry after 941.576822ms: waiting for machine to come up
	I1014 15:01:19.122097   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:19.122488   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:19.122514   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:19.122453   73268 retry.go:31] will retry after 1.5308999s: waiting for machine to come up
	I1014 15:01:18.077601   71679 start.go:360] acquireMachinesLock for no-preload-813300: {Name:mk0b928e3a7c606d1470b2064477a22c5e59969d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1014 15:01:20.655098   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:20.655524   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:20.655550   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:20.655475   73268 retry.go:31] will retry after 1.590510545s: waiting for machine to come up
	I1014 15:01:22.248128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:22.248551   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:22.248572   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:22.248511   73268 retry.go:31] will retry after 1.965898839s: waiting for machine to come up
	I1014 15:01:24.215742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:24.216187   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:24.216240   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:24.216136   73268 retry.go:31] will retry after 3.476459931s: waiting for machine to come up
	I1014 15:01:27.696804   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:27.697201   72173 main.go:141] libmachine: (embed-certs-989166) DBG | unable to find current IP address of domain embed-certs-989166 in network mk-embed-certs-989166
	I1014 15:01:27.697254   72173 main.go:141] libmachine: (embed-certs-989166) DBG | I1014 15:01:27.697175   73268 retry.go:31] will retry after 3.212757582s: waiting for machine to come up
	I1014 15:01:32.235659   72390 start.go:364] duration metric: took 3m35.715993521s to acquireMachinesLock for "default-k8s-diff-port-201291"
	I1014 15:01:32.235710   72390 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:32.235718   72390 fix.go:54] fixHost starting: 
	I1014 15:01:32.236084   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:32.236134   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:32.253294   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46045
	I1014 15:01:32.253760   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:32.254255   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:01:32.254275   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:32.254616   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:32.254797   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:32.254973   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:01:32.256494   72390 fix.go:112] recreateIfNeeded on default-k8s-diff-port-201291: state=Stopped err=<nil>
	I1014 15:01:32.256523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	W1014 15:01:32.256683   72390 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:32.258989   72390 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-201291" ...
	I1014 15:01:30.911781   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912283   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has current primary IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.912313   72173 main.go:141] libmachine: (embed-certs-989166) Found IP for machine: 192.168.39.41
	I1014 15:01:30.912331   72173 main.go:141] libmachine: (embed-certs-989166) Reserving static IP address...
	I1014 15:01:30.912771   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.912799   72173 main.go:141] libmachine: (embed-certs-989166) DBG | skip adding static IP to network mk-embed-certs-989166 - found existing host DHCP lease matching {name: "embed-certs-989166", mac: "52:54:00:ee:96:19", ip: "192.168.39.41"}
	I1014 15:01:30.912806   72173 main.go:141] libmachine: (embed-certs-989166) Reserved static IP address: 192.168.39.41
	I1014 15:01:30.912815   72173 main.go:141] libmachine: (embed-certs-989166) Waiting for SSH to be available...
	I1014 15:01:30.912822   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Getting to WaitForSSH function...
	I1014 15:01:30.914919   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915273   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:30.915310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:30.915392   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH client type: external
	I1014 15:01:30.915414   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa (-rw-------)
	I1014 15:01:30.915465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.41 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:30.915489   72173 main.go:141] libmachine: (embed-certs-989166) DBG | About to run SSH command:
	I1014 15:01:30.915503   72173 main.go:141] libmachine: (embed-certs-989166) DBG | exit 0
	I1014 15:01:31.042620   72173 main.go:141] libmachine: (embed-certs-989166) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:31.043061   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetConfigRaw
	I1014 15:01:31.043675   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.046338   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046679   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.046720   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.046941   72173 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/config.json ...
	I1014 15:01:31.047132   72173 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:31.047149   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.047348   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.049453   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049835   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.049857   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.049978   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.050147   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050282   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.050419   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.050573   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.050814   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.050828   72173 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:31.163270   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:31.163306   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163614   72173 buildroot.go:166] provisioning hostname "embed-certs-989166"
	I1014 15:01:31.163644   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.163821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.166684   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167009   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.167040   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.167157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.167416   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167582   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.167718   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.167857   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.168025   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.168040   72173 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-989166 && echo "embed-certs-989166" | sudo tee /etc/hostname
	I1014 15:01:31.292369   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-989166
	
	I1014 15:01:31.292405   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.295057   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295425   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.295449   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.295713   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.295915   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296076   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.296220   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.296395   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.296552   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.296567   72173 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-989166' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-989166/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-989166' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:31.411080   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:31.411112   72173 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:31.411131   72173 buildroot.go:174] setting up certificates
	I1014 15:01:31.411142   72173 provision.go:84] configureAuth start
	I1014 15:01:31.411150   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetMachineName
	I1014 15:01:31.411396   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:31.413972   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414319   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.414341   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.414502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.416775   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417092   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.417113   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.417278   72173 provision.go:143] copyHostCerts
	I1014 15:01:31.417340   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:31.417353   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:31.417437   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:31.417549   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:31.417559   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:31.417600   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:31.417677   72173 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:31.417687   72173 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:31.417721   72173 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:31.417788   72173 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.embed-certs-989166 san=[127.0.0.1 192.168.39.41 embed-certs-989166 localhost minikube]
	I1014 15:01:31.599973   72173 provision.go:177] copyRemoteCerts
	I1014 15:01:31.600034   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:31.600060   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.602964   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603270   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.603296   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.603502   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.603665   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.603821   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.603949   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:31.688890   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:31.713474   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1014 15:01:31.737692   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 15:01:31.760955   72173 provision.go:87] duration metric: took 349.799595ms to configureAuth
	I1014 15:01:31.760986   72173 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:31.761172   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:31.761244   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.763800   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764149   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.764181   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.764339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.764494   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764636   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.764732   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.764852   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:31.765002   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:31.765016   72173 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:31.992783   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:31.992811   72173 machine.go:96] duration metric: took 945.667058ms to provisionDockerMachine
	I1014 15:01:31.992823   72173 start.go:293] postStartSetup for "embed-certs-989166" (driver="kvm2")
	I1014 15:01:31.992834   72173 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:31.992848   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:31.993203   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:31.993235   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:31.995966   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996380   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:31.996418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:31.996538   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:31.996714   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:31.996864   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:31.997003   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.081931   72173 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:32.086191   72173 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:32.086218   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:32.086287   72173 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:32.086368   72173 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:32.086455   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:32.096414   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:32.120348   72173 start.go:296] duration metric: took 127.509685ms for postStartSetup
	I1014 15:01:32.120392   72173 fix.go:56] duration metric: took 19.044380323s for fixHost
	I1014 15:01:32.120412   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.123024   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123435   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.123465   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.123649   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.123832   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.123986   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.124152   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.124288   72173 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:32.124487   72173 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.39.41 22 <nil> <nil>}
	I1014 15:01:32.124502   72173 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:32.235487   72173 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918092.208431219
	
	I1014 15:01:32.235513   72173 fix.go:216] guest clock: 1728918092.208431219
	I1014 15:01:32.235522   72173 fix.go:229] Guest: 2024-10-14 15:01:32.208431219 +0000 UTC Remote: 2024-10-14 15:01:32.12039587 +0000 UTC m=+242.874215269 (delta=88.035349ms)
	I1014 15:01:32.235559   72173 fix.go:200] guest clock delta is within tolerance: 88.035349ms
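The clock check above compares the guest's "date +%s.%N" output against the host's wall clock and accepts the drift if it stays within tolerance. A minimal Go sketch of that comparison (the helper name, structure, and the 2s tolerance are assumptions for illustration, not minikube's fix.go):

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock converts the guest's `date +%s.%N` output
// (seconds.nanoseconds since the epoch) into a time.Time.
// Illustrative helper; float parsing loses a little nanosecond precision.
func parseGuestClock(out string) (time.Time, error) {
	secs, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1728918092.208431219")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	// Accept small drift; the exact tolerance is an assumption.
	const tolerance = 2 * time.Second
	fmt.Printf("clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}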
	I1014 15:01:32.235572   72173 start.go:83] releasing machines lock for "embed-certs-989166", held for 19.159587089s
	I1014 15:01:32.235600   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.235877   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:32.238642   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.238995   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.239025   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.239175   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239719   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239891   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:01:32.239978   72173 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:32.240031   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.240091   72173 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:32.240115   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:01:32.242742   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243102   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243128   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243177   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243275   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243482   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.243653   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:32.243664   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.243676   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:32.243811   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:01:32.243822   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.243929   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:01:32.244050   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:01:32.244168   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:01:32.357542   72173 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:32.365113   72173 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:32.510557   72173 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:32.516545   72173 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:32.516628   72173 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:32.533449   72173 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:32.533473   72173 start.go:495] detecting cgroup driver to use...
	I1014 15:01:32.533549   72173 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:32.549503   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:32.563126   72173 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:32.563184   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:32.576972   72173 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:32.591047   72173 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:32.704839   72173 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:32.844770   72173 docker.go:233] disabling docker service ...
	I1014 15:01:32.844855   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:32.859524   72173 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:32.872297   72173 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:33.014291   72173 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:33.136889   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:33.151656   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:33.170504   72173 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:33.170575   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.180894   72173 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:33.180968   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.192268   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.203509   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.215958   72173 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:33.227981   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.241615   72173 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.261168   72173 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:33.273098   72173 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:33.284050   72173 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:33.284225   72173 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:33.299547   72173 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
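When the bridge-nf-call-iptables sysctl probe fails because /proc/sys/net/bridge is absent, the runner loads br_netfilter and enables IPv4 forwarding, as logged above. A rough Go equivalent of that sequence using os/exec (the exact commands and error handling here are illustrative assumptions; all of it needs root on a Linux host):

package main

import (
	"fmt"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the logged sequence: probe the sysctl,
// and if the proc entry is missing, load br_netfilter, then enable
// IPv4 forwarding. Sketch only, not minikube's implementation.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	return exec.Command("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}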
	I1014 15:01:33.310259   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:33.426563   72173 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:33.526759   72173 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:33.526817   72173 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:33.532297   72173 start.go:563] Will wait 60s for crictl version
	I1014 15:01:33.532356   72173 ssh_runner.go:195] Run: which crictl
	I1014 15:01:33.536385   72173 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:33.576222   72173 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:33.576305   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.604603   72173 ssh_runner.go:195] Run: crio --version
	I1014 15:01:33.636261   72173 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:33.637497   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetIP
	I1014 15:01:33.640450   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.640768   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:01:33.640806   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:01:33.641001   72173 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:33.645241   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:33.658028   72173 kubeadm.go:883] updating cluster {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:33.658181   72173 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:33.658246   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:33.695188   72173 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:33.695261   72173 ssh_runner.go:195] Run: which lz4
	I1014 15:01:33.699735   72173 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:33.704540   72173 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:33.704576   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:32.260401   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Start
	I1014 15:01:32.260569   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring networks are active...
	I1014 15:01:32.261176   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network default is active
	I1014 15:01:32.261498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Ensuring network mk-default-k8s-diff-port-201291 is active
	I1014 15:01:32.261795   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Getting domain xml...
	I1014 15:01:32.262414   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Creating domain...
	I1014 15:01:33.520115   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting to get IP...
	I1014 15:01:33.521127   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521518   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.521609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.521520   73405 retry.go:31] will retry after 278.409801ms: waiting for machine to come up
	I1014 15:01:33.802289   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802720   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:33.802744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:33.802688   73405 retry.go:31] will retry after 362.923826ms: waiting for machine to come up
	I1014 15:01:34.167836   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168228   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.168273   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.168163   73405 retry.go:31] will retry after 315.156371ms: waiting for machine to come up
	I1014 15:01:34.485445   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485855   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:34.485876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:34.485840   73405 retry.go:31] will retry after 573.46626ms: waiting for machine to come up
	I1014 15:01:35.061472   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.061997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.062027   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.061965   73405 retry.go:31] will retry after 519.420022ms: waiting for machine to come up
	I1014 15:01:35.582694   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583130   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:35.583155   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:35.583062   73405 retry.go:31] will retry after 661.055324ms: waiting for machine to come up
	I1014 15:01:36.245525   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245876   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:36.245902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:36.245834   73405 retry.go:31] will retry after 870.411428ms: waiting for machine to come up
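The DBG lines above show the provisioner re-querying the domain's DHCP lease after growing, jittered delays until the machine reports an IP. A small Go sketch of that retry-with-backoff shape (illustrative only, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry calls fn until it succeeds or attempts are exhausted, sleeping a
// jittered, growing delay between tries — the same shape as the
// "will retry after Xms: waiting for machine to come up" lines above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(float64(base) * (1 + rand.Float64()) * float64(i+1))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	err := retry(5, 300*time.Millisecond, func() error {
		tries++
		if tries < 3 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("done:", err)
}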
	I1014 15:01:35.120269   72173 crio.go:462] duration metric: took 1.42058504s to copy over tarball
	I1014 15:01:35.120372   72173 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:37.206126   72173 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.08572724s)
	I1014 15:01:37.206168   72173 crio.go:469] duration metric: took 2.085859852s to extract the tarball
	I1014 15:01:37.206176   72173 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:37.243007   72173 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:37.289639   72173 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:37.289667   72173 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:37.289678   72173 kubeadm.go:934] updating node { 192.168.39.41 8443 v1.31.1 crio true true} ...
	I1014 15:01:37.289793   72173 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-989166 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.41
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:37.289878   72173 ssh_runner.go:195] Run: crio config
	I1014 15:01:37.348641   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:37.348665   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:37.348684   72173 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:37.348711   72173 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.41 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-989166 NodeName:embed-certs-989166 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.41"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.41 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:37.348861   72173 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.41
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-989166"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.41"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.41"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:37.348925   72173 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:37.359204   72173 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:37.359272   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:37.368810   72173 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 15:01:37.385402   72173 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:37.401828   72173 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2295 bytes)
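The kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration dump above is rendered from the kubeadm options logged at kubeadm.go:189 and then copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A hedged text/template sketch that renders a small fragment of such a config from a few of those options (the struct and template below are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// opts mirrors a few of the kubeadm options visible in the log
// (AdvertiseAddress, APIServerPort, NodeName).
type opts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress: "192.168.39.41",
		APIServerPort:    8443,
		NodeName:         "embed-certs-989166",
	})
}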
	I1014 15:01:37.418811   72173 ssh_runner.go:195] Run: grep 192.168.39.41	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:37.422655   72173 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.41	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:37.434567   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:37.561408   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:37.579549   72173 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166 for IP: 192.168.39.41
	I1014 15:01:37.579577   72173 certs.go:194] generating shared ca certs ...
	I1014 15:01:37.579596   72173 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:37.579766   72173 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:37.579878   72173 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:37.579894   72173 certs.go:256] generating profile certs ...
	I1014 15:01:37.579998   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/client.key
	I1014 15:01:37.580079   72173 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key.8939f8c2
	I1014 15:01:37.580148   72173 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key
	I1014 15:01:37.580316   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:37.580364   72173 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:37.580376   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:37.580413   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:37.580445   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:37.580482   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:37.580536   72173 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:37.581259   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:37.632130   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:37.678608   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:37.705377   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:37.731897   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1014 15:01:37.775043   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:37.801653   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:37.826547   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/embed-certs-989166/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:37.852086   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:37.878715   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:37.905883   72173 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:37.932458   72173 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:37.951362   72173 ssh_runner.go:195] Run: openssl version
	I1014 15:01:37.957730   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:37.969936   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974871   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.974931   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:37.981060   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:37.992086   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:38.003528   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008267   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.008332   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:38.014243   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:38.025272   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:38.036191   72173 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040751   72173 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.040804   72173 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:38.046605   72173 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:38.057815   72173 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:38.062497   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:38.068889   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:38.075278   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:38.081663   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:38.087892   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:38.093748   72173 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
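Each "openssl x509 -noout -checkend 86400" call above verifies that a control-plane certificate will not expire within the next 24 hours. An equivalent check can be sketched with Go's crypto/x509 (the path in main is just one of the certificates from the log; the helper itself is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d — roughly what `openssl x509 -checkend 86400` checks.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}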
	I1014 15:01:38.099807   72173 kubeadm.go:392] StartCluster: {Name:embed-certs-989166 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.1 ClusterName:embed-certs-989166 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:38.099912   72173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:38.099960   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.140896   72173 cri.go:89] found id: ""
	I1014 15:01:38.140973   72173 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:38.151443   72173 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:38.151462   72173 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:38.151512   72173 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:38.161419   72173 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:38.162357   72173 kubeconfig.go:125] found "embed-certs-989166" server: "https://192.168.39.41:8443"
	I1014 15:01:38.164328   72173 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:38.174731   72173 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.41
	I1014 15:01:38.174767   72173 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:38.174782   72173 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:38.174849   72173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:38.214903   72173 cri.go:89] found id: ""
	I1014 15:01:38.214982   72173 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:38.232891   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:38.242711   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:38.242735   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:38.242793   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:01:38.251939   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:38.252019   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:38.262108   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:01:38.271688   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:38.271751   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:38.281420   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.290693   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:38.290752   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:38.300205   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:01:38.309174   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:38.309236   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:38.318616   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:38.328337   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:38.436297   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:37.118307   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118744   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:37.118784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:37.118706   73405 retry.go:31] will retry after 1.481454557s: waiting for machine to come up
	I1014 15:01:38.601780   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602168   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:38.602212   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:38.602118   73405 retry.go:31] will retry after 1.22705177s: waiting for machine to come up
	I1014 15:01:39.831413   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831889   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:39.831963   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:39.831838   73405 retry.go:31] will retry after 1.898722681s: waiting for machine to come up
	I1014 15:01:39.574410   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.138075676s)
	I1014 15:01:39.574444   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.789417   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:39.873563   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:40.011579   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:40.011673   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:40.511877   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.012608   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:41.512235   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.012435   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:01:42.047878   72173 api_server.go:72] duration metric: took 2.036298602s to wait for apiserver process to appear ...
	I1014 15:01:42.047909   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:01:42.047935   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.298692   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.298726   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.298743   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.317315   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:01:44.317353   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:01:44.548651   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:44.559477   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:44.559513   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
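The wait loop above polls https://192.168.39.41:8443/healthz and treats 403 and 500 responses as "apiserver not ready yet", retrying until it gets 200 or times out. A minimal Go sketch of that polling pattern (TLS verification is skipped purely for illustration; this is not minikube's api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 or the deadline passes.
// Non-200 responses (403/500 as in the log above) count as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz not ok within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.41:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}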
	I1014 15:01:45.048060   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.057070   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.057099   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:45.548344   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:45.552611   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:01:45.552640   72173 api_server.go:103] status: https://192.168.39.41:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:01:46.048314   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:01:46.054943   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:01:46.062740   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:01:46.062769   72173 api_server.go:131] duration metric: took 4.014851988s to wait for apiserver health ...
	I1014 15:01:46.062779   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:01:46.062785   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:46.064824   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
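	(Editor's note: the repeated probes above show minikube polling https://192.168.39.41:8443/healthz until the 500 responses turn into a 200 "ok". The following is a minimal, hypothetical Go sketch of such a polling loop, not minikube's actual api_server.go; the URL, timeout, and helper name are illustrative only.)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the timeout expires. InsecureSkipVerify is used only because the test
	// apiserver serves a self-signed certificate.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // control plane answered "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence seen in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.39.41:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}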
	I1014 15:01:41.731928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732483   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:41.732515   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:41.732435   73405 retry.go:31] will retry after 2.349662063s: waiting for machine to come up
	I1014 15:01:44.083975   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084492   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:44.084523   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:44.084437   73405 retry.go:31] will retry after 3.472214726s: waiting for machine to come up
	I1014 15:01:46.066505   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:01:46.092975   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:01:46.123873   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:01:46.142575   72173 system_pods.go:59] 8 kube-system pods found
	I1014 15:01:46.142636   72173 system_pods.go:61] "coredns-7c65d6cfc9-r8x9s" [5a00095c-8777-412a-a7af-319a03d6153e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:01:46.142647   72173 system_pods.go:61] "etcd-embed-certs-989166" [981d2f54-f128-4527-a7cb-a6b9c647740b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:01:46.142658   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [31780b5a-6ebf-4c75-bd27-64a95193827f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:01:46.142668   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [345e7656-579a-4be9-bcf0-4117880a2988] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:01:46.142678   72173 system_pods.go:61] "kube-proxy-7p84k" [5d8243a8-7247-490f-9102-61008a614a67] Running
	I1014 15:01:46.142685   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [53b4b4a4-74ec-485e-99e3-b53c2edc80ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:01:46.142695   72173 system_pods.go:61] "metrics-server-6867b74b74-zc8zh" [5abf22c7-d271-4c3a-8e0e-cd867142cee1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:01:46.142703   72173 system_pods.go:61] "storage-provisioner" [6860efa4-c72f-477f-b9e1-e90ddcd112b5] Running
	I1014 15:01:46.142711   72173 system_pods.go:74] duration metric: took 18.811157ms to wait for pod list to return data ...
	I1014 15:01:46.142722   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:01:46.154420   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:01:46.154449   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:01:46.154463   72173 node_conditions.go:105] duration metric: took 11.735142ms to run NodePressure ...
	I1014 15:01:46.154483   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:46.417106   72173 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422102   72173 kubeadm.go:739] kubelet initialised
	I1014 15:01:46.422127   72173 kubeadm.go:740] duration metric: took 4.991248ms waiting for restarted kubelet to initialise ...
	I1014 15:01:46.422135   72173 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:01:46.428014   72173 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.432946   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432965   72173 pod_ready.go:82] duration metric: took 4.927935ms for pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.432972   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "coredns-7c65d6cfc9-r8x9s" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.432979   72173 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.441849   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441868   72173 pod_ready.go:82] duration metric: took 8.882863ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.441877   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "etcd-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.441883   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.446863   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446891   72173 pod_ready.go:82] duration metric: took 4.997658ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.446912   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.446922   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.526949   72173 pod_ready.go:98] node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526972   72173 pod_ready.go:82] duration metric: took 80.035898ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	E1014 15:01:46.526981   72173 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-989166" hosting pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-989166" has status "Ready":"False"
	I1014 15:01:46.526987   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927217   72173 pod_ready.go:93] pod "kube-proxy-7p84k" in "kube-system" namespace has status "Ready":"True"
	I1014 15:01:46.927249   72173 pod_ready.go:82] duration metric: took 400.252417ms for pod "kube-proxy-7p84k" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:46.927263   72173 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:01:48.933034   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
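	(Editor's note: the pod_ready entries above wait for each system-critical pod's Ready condition and skip pods whose hosting node is not yet Ready. Below is a rough client-go sketch of that readiness check, hypothetical rather than minikube's pod_ready.go; the kubeconfig source and pod name are assumptions for illustration.)

	// podReady reports whether the named pod has the PodReady condition set to True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ready, err := podReady(context.Background(), cs, "kube-system", "kube-proxy-7p84k")
		fmt.Println(ready, err)
	}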
	I1014 15:01:47.558671   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559112   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | unable to find current IP address of domain default-k8s-diff-port-201291 in network mk-default-k8s-diff-port-201291
	I1014 15:01:47.559143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | I1014 15:01:47.559067   73405 retry.go:31] will retry after 3.421253013s: waiting for machine to come up
	I1014 15:01:50.981602   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982143   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has current primary IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.982167   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Found IP for machine: 192.168.50.128
	I1014 15:01:50.982186   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserving static IP address...
	I1014 15:01:50.982682   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.982703   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Reserved static IP address: 192.168.50.128
	I1014 15:01:50.982722   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | skip adding static IP to network mk-default-k8s-diff-port-201291 - found existing host DHCP lease matching {name: "default-k8s-diff-port-201291", mac: "52:54:00:23:03:c4", ip: "192.168.50.128"}
	I1014 15:01:50.982743   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Getting to WaitForSSH function...
	I1014 15:01:50.982781   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Waiting for SSH to be available...
	I1014 15:01:50.985084   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985609   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:50.985640   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:50.985750   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH client type: external
	I1014 15:01:50.985778   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa (-rw-------)
	I1014 15:01:50.985814   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.128 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:01:50.985832   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | About to run SSH command:
	I1014 15:01:50.985849   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | exit 0
	I1014 15:01:51.123927   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | SSH cmd err, output: <nil>: 
	I1014 15:01:51.124457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetConfigRaw
	I1014 15:01:51.125106   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.128286   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.128716   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.128770   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.129045   72390 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/config.json ...
	I1014 15:01:51.129283   72390 machine.go:93] provisionDockerMachine start ...
	I1014 15:01:51.129308   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.129551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.131756   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132164   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.132207   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.132488   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.132701   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.132873   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.133022   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.133181   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.133421   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.133436   72390 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:01:51.244659   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:01:51.244691   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.244923   72390 buildroot.go:166] provisioning hostname "default-k8s-diff-port-201291"
	I1014 15:01:51.244953   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.245149   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.248061   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248429   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.248463   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.248521   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.248697   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.248887   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.249034   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.249227   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.249448   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.249463   72390 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-201291 && echo "default-k8s-diff-port-201291" | sudo tee /etc/hostname
	I1014 15:01:51.373260   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-201291
	
	I1014 15:01:51.373293   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.376195   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376528   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.376549   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.376752   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.376962   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377159   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.377296   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.377446   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.377657   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.377676   72390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-201291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-201291/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-201291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:01:52.179441   72639 start.go:364] duration metric: took 3m34.072351032s to acquireMachinesLock for "old-k8s-version-399767"
	I1014 15:01:52.179497   72639 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:01:52.179505   72639 fix.go:54] fixHost starting: 
	I1014 15:01:52.179834   72639 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:01:52.179873   72639 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:01:52.196724   72639 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39389
	I1014 15:01:52.197171   72639 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:01:52.197649   72639 main.go:141] libmachine: Using API Version  1
	I1014 15:01:52.197673   72639 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:01:52.198010   72639 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:01:52.198191   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:01:52.198337   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetState
	I1014 15:01:52.199789   72639 fix.go:112] recreateIfNeeded on old-k8s-version-399767: state=Stopped err=<nil>
	I1014 15:01:52.199826   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	W1014 15:01:52.199998   72639 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:01:52.202220   72639 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-399767" ...
	I1014 15:01:52.203601   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .Start
	I1014 15:01:52.203771   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring networks are active...
	I1014 15:01:52.204575   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network default is active
	I1014 15:01:52.204971   72639 main.go:141] libmachine: (old-k8s-version-399767) Ensuring network mk-old-k8s-version-399767 is active
	I1014 15:01:52.205326   72639 main.go:141] libmachine: (old-k8s-version-399767) Getting domain xml...
	I1014 15:01:52.206026   72639 main.go:141] libmachine: (old-k8s-version-399767) Creating domain...
	I1014 15:01:51.488446   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:01:51.488486   72390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:01:51.488535   72390 buildroot.go:174] setting up certificates
	I1014 15:01:51.488553   72390 provision.go:84] configureAuth start
	I1014 15:01:51.488570   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetMachineName
	I1014 15:01:51.488867   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:51.491749   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492141   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.492171   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.492351   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.494197   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494498   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.494524   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.494693   72390 provision.go:143] copyHostCerts
	I1014 15:01:51.494745   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:01:51.494764   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:01:51.494834   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:01:51.494945   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:01:51.494958   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:01:51.494992   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:01:51.495081   72390 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:01:51.495095   72390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:01:51.495122   72390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:01:51.495214   72390 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-201291 san=[127.0.0.1 192.168.50.128 default-k8s-diff-port-201291 localhost minikube]
	I1014 15:01:51.567041   72390 provision.go:177] copyRemoteCerts
	I1014 15:01:51.567098   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:01:51.567121   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.570006   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570340   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.570368   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.570562   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.570769   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.570941   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.571047   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:51.652956   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:01:51.677959   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1014 15:01:51.702009   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:01:51.727016   72390 provision.go:87] duration metric: took 238.449189ms to configureAuth
	I1014 15:01:51.727043   72390 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:01:51.727207   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:01:51.727276   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.729742   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730043   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.730065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.730242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.730418   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730578   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.730735   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.730891   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:51.731097   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:51.731114   72390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:01:51.942847   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:01:51.942874   72390 machine.go:96] duration metric: took 813.575194ms to provisionDockerMachine
	I1014 15:01:51.942888   72390 start.go:293] postStartSetup for "default-k8s-diff-port-201291" (driver="kvm2")
	I1014 15:01:51.942903   72390 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:01:51.942926   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:51.943250   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:01:51.943283   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:51.946246   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946608   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:51.946638   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:51.946799   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:51.946984   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:51.947165   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:51.947293   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.030124   72390 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:01:52.034493   72390 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:01:52.034525   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:01:52.034625   72390 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:01:52.034740   72390 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:01:52.034834   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:01:52.044919   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:52.068326   72390 start.go:296] duration metric: took 125.426221ms for postStartSetup
	I1014 15:01:52.068370   72390 fix.go:56] duration metric: took 19.832650283s for fixHost
	I1014 15:01:52.068394   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.070949   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071362   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.071388   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.071588   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.071788   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.071908   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.072065   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.072231   72390 main.go:141] libmachine: Using SSH client type: native
	I1014 15:01:52.072449   72390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.50.128 22 <nil> <nil>}
	I1014 15:01:52.072468   72390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:01:52.179264   72390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918112.149610573
	
	I1014 15:01:52.179291   72390 fix.go:216] guest clock: 1728918112.149610573
	I1014 15:01:52.179301   72390 fix.go:229] Guest: 2024-10-14 15:01:52.149610573 +0000 UTC Remote: 2024-10-14 15:01:52.06837553 +0000 UTC m=+235.685992564 (delta=81.235043ms)
	I1014 15:01:52.179349   72390 fix.go:200] guest clock delta is within tolerance: 81.235043ms
	I1014 15:01:52.179354   72390 start.go:83] releasing machines lock for "default-k8s-diff-port-201291", held for 19.943664398s
	I1014 15:01:52.179387   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.179666   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:52.182457   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.182834   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.182861   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.183000   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183598   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183784   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:01:52.183883   72390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:01:52.183928   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.183993   72390 ssh_runner.go:195] Run: cat /version.json
	I1014 15:01:52.184017   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:01:52.186499   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186692   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.186890   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.186915   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187021   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:52.187050   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:52.187086   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187288   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:01:52.187331   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187479   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187485   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:01:52.187597   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.187688   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:01:52.187843   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:01:52.264102   72390 ssh_runner.go:195] Run: systemctl --version
	I1014 15:01:52.291233   72390 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:01:52.443318   72390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:01:52.450321   72390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:01:52.450400   72390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:01:52.467949   72390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:01:52.467975   72390 start.go:495] detecting cgroup driver to use...
	I1014 15:01:52.468039   72390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:01:52.485758   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:01:52.500662   72390 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:01:52.500729   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:01:52.520846   72390 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:01:52.535606   72390 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:01:52.671062   72390 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:01:52.845631   72390 docker.go:233] disabling docker service ...
	I1014 15:01:52.845694   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:01:52.867403   72390 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:01:52.882344   72390 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:01:53.020570   72390 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:01:53.157941   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:01:53.174989   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:01:53.195729   72390 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:01:53.195799   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.207613   72390 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:01:53.207671   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.218838   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.231186   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.247521   72390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:01:53.258128   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.269119   72390 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.287810   72390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:01:53.298576   72390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:01:53.308114   72390 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:01:53.308169   72390 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:01:53.322207   72390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:01:53.332284   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:53.483702   72390 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:01:53.581260   72390 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:01:53.581341   72390 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:01:53.586042   72390 start.go:563] Will wait 60s for crictl version
	I1014 15:01:53.586105   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:01:53.589931   72390 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:01:53.634776   72390 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:01:53.634864   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.664242   72390 ssh_runner.go:195] Run: crio --version
	I1014 15:01:53.698374   72390 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:01:50.933590   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:52.935445   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:53.699730   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetIP
	I1014 15:01:53.702837   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703224   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:01:53.703245   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:01:53.703528   72390 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1014 15:01:53.707720   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:53.721953   72390 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:01:53.722106   72390 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:01:53.722165   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:53.779083   72390 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:01:53.779139   72390 ssh_runner.go:195] Run: which lz4
	I1014 15:01:53.783197   72390 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:01:53.787515   72390 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:01:53.787549   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (388599353 bytes)
	I1014 15:01:55.277150   72390 crio.go:462] duration metric: took 1.493980352s to copy over tarball
	I1014 15:01:55.277212   72390 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:01:53.506315   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting to get IP...
	I1014 15:01:53.507576   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.508228   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.508297   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.508202   73581 retry.go:31] will retry after 220.59125ms: waiting for machine to come up
	I1014 15:01:53.730853   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:53.731286   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:53.731339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:53.731257   73581 retry.go:31] will retry after 321.559387ms: waiting for machine to come up
	I1014 15:01:54.054891   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.055482   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.055509   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.055443   73581 retry.go:31] will retry after 444.912998ms: waiting for machine to come up
	I1014 15:01:54.502125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:54.502479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:54.502525   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:54.502462   73581 retry.go:31] will retry after 600.214254ms: waiting for machine to come up
	I1014 15:01:55.104962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.105479   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.105504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.105425   73581 retry.go:31] will retry after 686.77698ms: waiting for machine to come up
	I1014 15:01:55.794125   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:55.794825   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:55.794871   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:55.794717   73581 retry.go:31] will retry after 926.146146ms: waiting for machine to come up
	I1014 15:01:56.722712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:56.723153   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:56.723183   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:56.723112   73581 retry.go:31] will retry after 1.108272037s: waiting for machine to come up
	I1014 15:01:57.832729   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:57.833304   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:57.833356   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:57.833279   73581 retry.go:31] will retry after 1.442737664s: waiting for machine to come up
	I1014 15:01:55.435691   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.933561   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:01:57.424526   72390 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.147277316s)
	I1014 15:01:57.424559   72390 crio.go:469] duration metric: took 2.147385522s to extract the tarball
	I1014 15:01:57.424566   72390 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:01:57.461792   72390 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:01:57.504424   72390 crio.go:514] all images are preloaded for cri-o runtime.
	I1014 15:01:57.504450   72390 cache_images.go:84] Images are preloaded, skipping loading
	I1014 15:01:57.504460   72390 kubeadm.go:934] updating node { 192.168.50.128 8444 v1.31.1 crio true true} ...
	I1014 15:01:57.504656   72390 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-201291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.128
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:01:57.504759   72390 ssh_runner.go:195] Run: crio config
	I1014 15:01:57.555431   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:01:57.555453   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:01:57.555462   72390 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:01:57.555482   72390 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.128 APIServerPort:8444 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-201291 NodeName:default-k8s-diff-port-201291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.128"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.128 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:01:57.555593   72390 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.128
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-201291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.128"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.128"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:01:57.555652   72390 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:01:57.565953   72390 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:01:57.566025   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:01:57.576141   72390 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1014 15:01:57.594855   72390 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:01:57.611249   72390 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1014 15:01:57.628363   72390 ssh_runner.go:195] Run: grep 192.168.50.128	control-plane.minikube.internal$ /etc/hosts
	I1014 15:01:57.632552   72390 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.128	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:01:57.645588   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:01:57.769192   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:01:57.787654   72390 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291 for IP: 192.168.50.128
	I1014 15:01:57.787677   72390 certs.go:194] generating shared ca certs ...
	I1014 15:01:57.787695   72390 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:01:57.787865   72390 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:01:57.787916   72390 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:01:57.787930   72390 certs.go:256] generating profile certs ...
	I1014 15:01:57.788084   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/client.key
	I1014 15:01:57.788174   72390 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key.517dfce8
	I1014 15:01:57.788223   72390 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key
	I1014 15:01:57.788371   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:01:57.788407   72390 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:01:57.788417   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:01:57.788439   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:01:57.788460   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:01:57.788482   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:01:57.788521   72390 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:01:57.789141   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:01:57.821159   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:01:57.875530   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:01:57.902687   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:01:57.935658   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1014 15:01:57.961987   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1014 15:01:57.987107   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:01:58.013544   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/default-k8s-diff-port-201291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:01:58.039793   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:01:58.071154   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:01:58.102574   72390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:01:58.127398   72390 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:01:58.144906   72390 ssh_runner.go:195] Run: openssl version
	I1014 15:01:58.150817   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:01:58.162122   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167170   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.167240   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:01:58.173692   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:01:58.185769   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:01:58.197045   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201652   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.201716   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:01:58.207559   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:01:58.218921   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:01:58.230822   72390 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235774   72390 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.235832   72390 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:01:58.241546   72390 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:01:58.252618   72390 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:01:58.257509   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:01:58.263891   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:01:58.270085   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:01:58.276427   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:01:58.282346   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:01:58.288396   72390 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1014 15:01:58.294386   72390 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-201291 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-201291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:01:58.294472   72390 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:01:58.294517   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.342008   72390 cri.go:89] found id: ""
	I1014 15:01:58.342088   72390 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:01:58.352478   72390 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:01:58.352512   72390 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:01:58.352566   72390 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:01:58.363158   72390 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:01:58.364106   72390 kubeconfig.go:125] found "default-k8s-diff-port-201291" server: "https://192.168.50.128:8444"
	I1014 15:01:58.366079   72390 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:01:58.375635   72390 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.128
	I1014 15:01:58.375666   72390 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:01:58.375680   72390 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:01:58.375733   72390 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:01:58.411846   72390 cri.go:89] found id: ""
	I1014 15:01:58.411923   72390 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:01:58.428602   72390 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:01:58.439214   72390 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:01:58.439239   72390 kubeadm.go:157] found existing configuration files:
	
	I1014 15:01:58.439293   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1014 15:01:58.448475   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:01:58.448528   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:01:58.457816   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1014 15:01:58.467279   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:01:58.467352   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:01:58.477479   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.487899   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:01:58.487968   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:01:58.498296   72390 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1014 15:01:58.507910   72390 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:01:58.507977   72390 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:01:58.517901   72390 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:01:58.527983   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:58.654226   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.576099   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.790552   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.879043   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:01:59.963369   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:01:59.963462   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.464403   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.963891   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:00.994849   72390 api_server.go:72] duration metric: took 1.031477803s to wait for apiserver process to appear ...
	I1014 15:02:00.994875   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:00.994897   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:01:59.278031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:01:59.278558   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:01:59.278586   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:01:59.278519   73581 retry.go:31] will retry after 1.187069828s: waiting for machine to come up
	I1014 15:02:00.467810   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:00.468237   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:00.468267   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:00.468195   73581 retry.go:31] will retry after 1.667312665s: waiting for machine to come up
	I1014 15:02:02.137067   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:02.137569   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:02.137590   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:02.137530   73581 retry.go:31] will retry after 1.910892221s: waiting for machine to come up
	I1014 15:01:59.994818   72173 pod_ready.go:103] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:00.130085   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:00.130109   72173 pod_ready.go:82] duration metric: took 13.202838085s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:00.130121   72173 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:02.142821   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:03.649728   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:03.649764   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:03.649780   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:03.754772   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:03.754805   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:03.995106   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.020015   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.020040   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.495270   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:04.501643   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:04.501694   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:04.995049   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.002865   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:05.002893   72390 api_server.go:103] status: https://192.168.50.128:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:05.495412   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:02:05.499936   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:02:05.506656   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:02:05.506685   72390 api_server.go:131] duration metric: took 4.511803211s to wait for apiserver health ...
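	The healthz progression above is typical of an apiserver restart: the first probe returns 403, most likely because the bootstrap RBAC roles that allow anonymous access to /healthz have not been recreated yet, the next few return 500 with a per-hook breakdown while individual post-start hooks finish, and the final probe returns 200. The same verbose breakdown can be fetched by hand against this profile's endpoint (a sketch; -k skips verification because the serving certificate is signed by the cluster's own CA):
	
	curl -k 'https://192.168.50.128:8444/healthz?verbose'
	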
	I1014 15:02:05.506694   72390 cni.go:84] Creating CNI manager for ""
	I1014 15:02:05.506700   72390 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:05.508420   72390 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:02:05.509685   72390 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:02:05.521314   72390 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:02:05.543021   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:02:05.553508   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:02:05.553539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:02:05.553548   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:02:05.553555   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:02:05.553562   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:02:05.553567   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:02:05.553572   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:02:05.553577   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:02:05.553581   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:02:05.553587   72390 system_pods.go:74] duration metric: took 10.544168ms to wait for pod list to return data ...
	I1014 15:02:05.553593   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:02:05.558889   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:02:05.558917   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:02:05.558929   72390 node_conditions.go:105] duration metric: took 5.331009ms to run NodePressure ...
	I1014 15:02:05.558948   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:05.819037   72390 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826431   72390 kubeadm.go:739] kubelet initialised
	I1014 15:02:05.826456   72390 kubeadm.go:740] duration metric: took 7.391664ms waiting for restarted kubelet to initialise ...
	I1014 15:02:05.826463   72390 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:05.833547   72390 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.840150   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840175   72390 pod_ready.go:82] duration metric: took 6.599969ms for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.840186   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.840205   72390 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.850319   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850346   72390 pod_ready.go:82] duration metric: took 10.130163ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.850359   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.850368   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.857192   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857215   72390 pod_ready.go:82] duration metric: took 6.838793ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.857228   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.857237   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:05.946611   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946646   72390 pod_ready.go:82] duration metric: took 89.397304ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:05.946663   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:05.946674   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.346368   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346400   72390 pod_ready.go:82] duration metric: took 399.71513ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.346413   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-proxy-rh82t" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.346423   72390 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:06.746899   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746928   72390 pod_ready.go:82] duration metric: took 400.494872ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:06.746941   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:06.746951   72390 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:07.146147   72390 pod_ready.go:98] node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146175   72390 pod_ready.go:82] duration metric: took 399.215075ms for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:02:07.146199   72390 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-201291" hosting pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:07.146215   72390 pod_ready.go:39] duration metric: took 1.319742206s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:07.146237   72390 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:02:07.158049   72390 ops.go:34] apiserver oom_adj: -16
	I1014 15:02:07.158072   72390 kubeadm.go:597] duration metric: took 8.805549392s to restartPrimaryControlPlane
	I1014 15:02:07.158082   72390 kubeadm.go:394] duration metric: took 8.863707122s to StartCluster
	I1014 15:02:07.158102   72390 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.158192   72390 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:07.159622   72390 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:07.159917   72390 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.128 Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:02:07.159968   72390 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:02:07.160052   72390 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160074   72390 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160086   72390 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:02:07.160125   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160133   72390 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160166   72390 config.go:182] Loaded profile config "default-k8s-diff-port-201291": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:07.160181   72390 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-201291"
	I1014 15:02:07.160179   72390 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-201291"
	I1014 15:02:07.160228   72390 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.160251   72390 addons.go:243] addon metrics-server should already be in state true
	I1014 15:02:07.160312   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.160472   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160508   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160692   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160712   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.160729   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.160770   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.161892   72390 out.go:177] * Verifying Kubernetes components...
	I1014 15:02:07.163368   72390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:07.176101   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36801
	I1014 15:02:07.176351   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44737
	I1014 15:02:07.176705   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.176834   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.177272   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177298   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177392   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.177413   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.177600   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43091
	I1014 15:02:07.177639   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.177703   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.178070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.178181   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178244   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.178252   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178285   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.178566   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.178590   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.178944   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.179107   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.181971   72390 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-201291"
	W1014 15:02:07.181989   72390 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:02:07.182024   72390 host.go:66] Checking if "default-k8s-diff-port-201291" exists ...
	I1014 15:02:07.182278   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.182322   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.194707   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I1014 15:02:07.195401   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.196015   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.196043   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.196413   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.196511   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35479
	I1014 15:02:07.196618   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.196977   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.197479   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.197497   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.197520   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41695
	I1014 15:02:07.197848   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.197981   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.198048   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.198544   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.198567   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.198636   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199017   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.199817   72390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:07.199824   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.199864   72390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:07.200860   72390 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:07.201674   72390 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:02:04.050521   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:04.051060   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:04.051099   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:04.051015   73581 retry.go:31] will retry after 2.29433775s: waiting for machine to come up
	I1014 15:02:06.347519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:06.347985   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | unable to find current IP address of domain old-k8s-version-399767 in network mk-old-k8s-version-399767
	I1014 15:02:06.348004   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | I1014 15:02:06.347945   73581 retry.go:31] will retry after 3.499922823s: waiting for machine to come up
	I1014 15:02:07.202461   72390 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.202476   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:02:07.202491   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.203259   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:02:07.203275   72390 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:02:07.203292   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.205760   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206124   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.206150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206375   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.206533   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.206676   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.206729   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.206858   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.207134   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.207150   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.207248   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.207455   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.207559   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.207677   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.219554   72390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38833
	I1014 15:02:07.220070   72390 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:07.220483   72390 main.go:141] libmachine: Using API Version  1
	I1014 15:02:07.220508   72390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:07.220842   72390 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:07.221004   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetState
	I1014 15:02:07.222706   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .DriverName
	I1014 15:02:07.222961   72390 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.222979   72390 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:02:07.222997   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHHostname
	I1014 15:02:07.225715   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226209   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:23:03:c4", ip: ""} in network mk-default-k8s-diff-port-201291: {Iface:virbr2 ExpiryTime:2024-10-14 16:01:44 +0000 UTC Type:0 Mac:52:54:00:23:03:c4 Iaid: IPaddr:192.168.50.128 Prefix:24 Hostname:default-k8s-diff-port-201291 Clientid:01:52:54:00:23:03:c4}
	I1014 15:02:07.226250   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | domain default-k8s-diff-port-201291 has defined IP address 192.168.50.128 and MAC address 52:54:00:23:03:c4 in network mk-default-k8s-diff-port-201291
	I1014 15:02:07.226551   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHPort
	I1014 15:02:07.226964   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHKeyPath
	I1014 15:02:07.227118   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .GetSSHUsername
	I1014 15:02:07.227254   72390 sshutil.go:53] new ssh client: &{IP:192.168.50.128 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/default-k8s-diff-port-201291/id_rsa Username:docker}
	I1014 15:02:07.362105   72390 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:07.384279   72390 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:07.438536   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:02:07.551868   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:02:07.551897   72390 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:02:07.606347   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:02:07.656287   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:02:07.656313   72390 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:02:07.687002   72390 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.687027   72390 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:02:07.751715   72390 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:02:07.810869   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.810902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811193   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.811247   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811262   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811273   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.811281   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.811546   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.811562   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.811576   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:07.819897   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:07.819917   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:07.820156   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:07.820206   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:07.820179   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581553   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581583   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.581902   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.581943   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.581955   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.581974   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.581986   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.582197   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.582211   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595214   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595242   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595493   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) DBG | Closing plugin on server side
	I1014 15:02:08.595569   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595589   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595609   72390 main.go:141] libmachine: Making call to close driver server
	I1014 15:02:08.595623   72390 main.go:141] libmachine: (default-k8s-diff-port-201291) Calling .Close
	I1014 15:02:08.595833   72390 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:02:08.595847   72390 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:02:08.595864   72390 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-201291"
	I1014 15:02:08.597967   72390 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:02:04.638029   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:07.139428   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.248505   71679 start.go:364] duration metric: took 53.170862497s to acquireMachinesLock for "no-preload-813300"
	I1014 15:02:11.248567   71679 start.go:96] Skipping create...Using existing machine configuration
	I1014 15:02:11.248581   71679 fix.go:54] fixHost starting: 
	I1014 15:02:11.248978   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:02:11.249022   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:02:11.266270   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39251
	I1014 15:02:11.266780   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:02:11.267302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:02:11.267319   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:02:11.267675   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:02:11.267842   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:11.267984   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:02:11.269459   71679 fix.go:112] recreateIfNeeded on no-preload-813300: state=Stopped err=<nil>
	I1014 15:02:11.269484   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	W1014 15:02:11.269589   71679 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 15:02:11.271434   71679 out.go:177] * Restarting existing kvm2 VM for "no-preload-813300" ...
	I1014 15:02:08.599138   72390 addons.go:510] duration metric: took 1.439175047s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:02:09.388573   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:09.851017   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851562   72639 main.go:141] libmachine: (old-k8s-version-399767) Found IP for machine: 192.168.72.138
	I1014 15:02:09.851582   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has current primary IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.851587   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserving static IP address...
	I1014 15:02:09.851961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.851991   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | skip adding static IP to network mk-old-k8s-version-399767 - found existing host DHCP lease matching {name: "old-k8s-version-399767", mac: "52:54:00:87:01:70", ip: "192.168.72.138"}
	I1014 15:02:09.852009   72639 main.go:141] libmachine: (old-k8s-version-399767) Reserved static IP address: 192.168.72.138
	I1014 15:02:09.852021   72639 main.go:141] libmachine: (old-k8s-version-399767) Waiting for SSH to be available...
	I1014 15:02:09.852031   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Getting to WaitForSSH function...
	I1014 15:02:09.854039   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854351   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.854378   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.854493   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH client type: external
	I1014 15:02:09.854517   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa (-rw-------)
	I1014 15:02:09.854547   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.138 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:09.854559   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | About to run SSH command:
	I1014 15:02:09.854572   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | exit 0
	I1014 15:02:09.979174   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:09.979594   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetConfigRaw
	I1014 15:02:09.980252   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:09.983038   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.983502   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.983891   72639 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/config.json ...
	I1014 15:02:09.984191   72639 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:09.984220   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:09.984487   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:09.986947   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987361   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:09.987389   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:09.987514   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:09.987682   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987830   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:09.987924   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:09.988076   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:09.988338   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:09.988352   72639 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:10.098944   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:10.098968   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099242   72639 buildroot.go:166] provisioning hostname "old-k8s-version-399767"
	I1014 15:02:10.099268   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.099437   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.101961   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102298   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.102320   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.102468   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.102670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102846   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.102980   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.103124   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.103337   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.103353   72639 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-399767 && echo "old-k8s-version-399767" | sudo tee /etc/hostname
	I1014 15:02:10.226037   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-399767
	
	I1014 15:02:10.226069   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.228712   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229059   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.229082   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.229228   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.229408   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229549   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.229670   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.229804   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.230001   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.230018   72639 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-399767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-399767/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-399767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:10.344175   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:10.344206   72639 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:10.344270   72639 buildroot.go:174] setting up certificates
	I1014 15:02:10.344284   72639 provision.go:84] configureAuth start
	I1014 15:02:10.344302   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetMachineName
	I1014 15:02:10.344632   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:10.347200   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347587   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.347623   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.347812   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.349962   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350332   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.350364   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.350502   72639 provision.go:143] copyHostCerts
	I1014 15:02:10.350558   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:10.350574   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:10.350646   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:10.350734   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:10.350742   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:10.350762   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:10.350812   72639 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:10.350819   72639 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:10.350837   72639 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:10.350887   72639 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-399767 san=[127.0.0.1 192.168.72.138 localhost minikube old-k8s-version-399767]
	I1014 15:02:10.602118   72639 provision.go:177] copyRemoteCerts
	I1014 15:02:10.602175   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:10.602199   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.604519   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604744   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.604776   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.604946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.605127   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.605273   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.605403   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:10.689081   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:10.713512   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 15:02:10.738086   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:10.762274   72639 provision.go:87] duration metric: took 417.977128ms to configureAuth
	I1014 15:02:10.762307   72639 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:10.762486   72639 config.go:182] Loaded profile config "old-k8s-version-399767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1014 15:02:10.762552   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:10.765134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765442   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:10.765469   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:10.765600   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:10.765756   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765903   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:10.765998   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:10.766131   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:10.766297   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:10.766311   72639 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:11.011252   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:11.011279   72639 machine.go:96] duration metric: took 1.027069423s to provisionDockerMachine
	I1014 15:02:11.011292   72639 start.go:293] postStartSetup for "old-k8s-version-399767" (driver="kvm2")
	I1014 15:02:11.011304   72639 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:11.011349   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.011716   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:11.011751   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.014418   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014754   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.014790   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.014946   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.015125   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.015260   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.015376   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.097883   72639 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:11.102452   72639 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:11.102481   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:11.102551   72639 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:11.102687   72639 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:11.102781   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:11.112774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:11.138211   72639 start.go:296] duration metric: took 126.906035ms for postStartSetup
	I1014 15:02:11.138247   72639 fix.go:56] duration metric: took 18.958741429s for fixHost
	I1014 15:02:11.138270   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.140740   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141100   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.141139   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.141280   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.141484   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141668   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.141811   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.141974   72639 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:11.142131   72639 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.72.138 22 <nil> <nil>}
	I1014 15:02:11.142141   72639 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:11.248330   72639 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918131.224010283
	
	I1014 15:02:11.248355   72639 fix.go:216] guest clock: 1728918131.224010283
	I1014 15:02:11.248373   72639 fix.go:229] Guest: 2024-10-14 15:02:11.224010283 +0000 UTC Remote: 2024-10-14 15:02:11.138252894 +0000 UTC m=+233.173555624 (delta=85.757389ms)
	I1014 15:02:11.248399   72639 fix.go:200] guest clock delta is within tolerance: 85.757389ms
	I1014 15:02:11.248406   72639 start.go:83] releasing machines lock for "old-k8s-version-399767", held for 19.068928968s
	I1014 15:02:11.248434   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.248692   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:11.251774   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252134   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.252176   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.252358   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.252840   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253017   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .DriverName
	I1014 15:02:11.253104   72639 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:11.253150   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.253232   72639 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:11.253259   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHHostname
	I1014 15:02:11.256105   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256339   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256504   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256529   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256662   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.256732   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:11.256771   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:11.256844   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.256932   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHPort
	I1014 15:02:11.257003   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257141   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHKeyPath
	I1014 15:02:11.257131   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.257296   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetSSHUsername
	I1014 15:02:11.257414   72639 sshutil.go:53] new ssh client: &{IP:192.168.72.138 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/old-k8s-version-399767/id_rsa Username:docker}
	I1014 15:02:11.363838   72639 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:11.370414   72639 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:11.521232   72639 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:11.527623   72639 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:11.527712   72639 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:11.544532   72639 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:11.544559   72639 start.go:495] detecting cgroup driver to use...
	I1014 15:02:11.544614   72639 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:11.561693   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:11.576555   72639 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:11.576622   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:11.593830   72639 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:11.608785   72639 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:11.731034   72639 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:11.909278   72639 docker.go:233] disabling docker service ...
	I1014 15:02:11.909359   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:11.931218   72639 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:11.951710   72639 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:12.103012   72639 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:12.252290   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 15:02:12.270497   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:12.293240   72639 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1014 15:02:12.293297   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.304881   72639 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:12.304958   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.316294   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.328591   72639 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:12.340085   72639 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:12.351765   72639 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:12.362454   72639 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:12.362525   72639 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:12.376865   72639 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:12.387779   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:12.528541   72639 ssh_runner.go:195] Run: sudo systemctl restart crio
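The block above is the container-runtime preparation minikube performs over SSH before restarting cri-o: crictl is pointed at the cri-o socket, the pause image is pinned to registry.k8s.io/pause:3.2, and the cgroup manager is switched to cgroupfs. A condensed, illustrative sketch of those same steps (paths and values are taken from the logged commands; this is not a supported setup script):

	sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio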
	I1014 15:02:12.635262   72639 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:12.635335   72639 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:12.641070   72639 start.go:563] Will wait 60s for crictl version
	I1014 15:02:12.641121   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:12.645111   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:12.691103   72639 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:12.691199   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.720182   72639 ssh_runner.go:195] Run: crio --version
	I1014 15:02:12.754856   72639 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1014 15:02:12.756005   72639 main.go:141] libmachine: (old-k8s-version-399767) Calling .GetIP
	I1014 15:02:12.759369   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.759890   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:01:70", ip: ""} in network mk-old-k8s-version-399767: {Iface:virbr4 ExpiryTime:2024-10-14 16:02:04 +0000 UTC Type:0 Mac:52:54:00:87:01:70 Iaid: IPaddr:192.168.72.138 Prefix:24 Hostname:old-k8s-version-399767 Clientid:01:52:54:00:87:01:70}
	I1014 15:02:12.759924   72639 main.go:141] libmachine: (old-k8s-version-399767) DBG | domain old-k8s-version-399767 has defined IP address 192.168.72.138 and MAC address 52:54:00:87:01:70 in network mk-old-k8s-version-399767
	I1014 15:02:12.760164   72639 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:12.765342   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:12.782182   72639 kubeadm.go:883] updating cluster {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:12.782307   72639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 15:02:12.782374   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:12.841797   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:12.841871   72639 ssh_runner.go:195] Run: which lz4
	I1014 15:02:12.846193   72639 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1014 15:02:12.850982   72639 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1014 15:02:12.851019   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1014 15:02:09.636366   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.637804   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:13.638684   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:11.272626   71679 main.go:141] libmachine: (no-preload-813300) Calling .Start
	I1014 15:02:11.272827   71679 main.go:141] libmachine: (no-preload-813300) Ensuring networks are active...
	I1014 15:02:11.273510   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network default is active
	I1014 15:02:11.273954   71679 main.go:141] libmachine: (no-preload-813300) Ensuring network mk-no-preload-813300 is active
	I1014 15:02:11.274410   71679 main.go:141] libmachine: (no-preload-813300) Getting domain xml...
	I1014 15:02:11.275263   71679 main.go:141] libmachine: (no-preload-813300) Creating domain...
	I1014 15:02:12.614590   71679 main.go:141] libmachine: (no-preload-813300) Waiting to get IP...
	I1014 15:02:12.615572   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.616018   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.616092   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.616013   73776 retry.go:31] will retry after 302.312986ms: waiting for machine to come up
	I1014 15:02:12.919678   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:12.920039   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:12.920074   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:12.920005   73776 retry.go:31] will retry after 371.392955ms: waiting for machine to come up
	I1014 15:02:13.292596   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.293214   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.293244   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.293164   73776 retry.go:31] will retry after 299.379251ms: waiting for machine to come up
	I1014 15:02:13.594808   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:13.595344   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:13.595370   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:13.595297   73776 retry.go:31] will retry after 598.480386ms: waiting for machine to come up
	I1014 15:02:14.195149   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.195744   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.195775   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.195696   73776 retry.go:31] will retry after 567.581822ms: waiting for machine to come up
	I1014 15:02:14.764315   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:14.764863   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:14.764886   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:14.764815   73776 retry.go:31] will retry after 587.597591ms: waiting for machine to come up
	I1014 15:02:15.353495   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:15.353948   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:15.353980   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:15.353896   73776 retry.go:31] will retry after 1.024496536s: waiting for machine to come up
	I1014 15:02:11.889135   72390 node_ready.go:53] node "default-k8s-diff-port-201291" has status "Ready":"False"
	I1014 15:02:13.889200   72390 node_ready.go:49] node "default-k8s-diff-port-201291" has status "Ready":"True"
	I1014 15:02:13.889228   72390 node_ready.go:38] duration metric: took 6.504919545s for node "default-k8s-diff-port-201291" to be "Ready" ...
	I1014 15:02:13.889240   72390 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:02:13.898112   72390 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:15.907127   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:14.579304   72639 crio.go:462] duration metric: took 1.733147869s to copy over tarball
	I1014 15:02:14.579405   72639 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1014 15:02:17.644891   72639 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.06545265s)
	I1014 15:02:17.644954   72639 crio.go:469] duration metric: took 3.065620277s to extract the tarball
	I1014 15:02:17.644979   72639 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1014 15:02:17.688304   72639 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:17.727862   72639 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1014 15:02:17.727888   72639 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:17.727984   72639 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.727995   72639 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.728006   72639 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.728036   72639 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.727986   72639 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.728104   72639 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.728169   72639 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1014 15:02:17.728267   72639 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:17.729941   72639 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.729954   72639 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.729900   72639 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1014 15:02:17.729984   72639 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.729999   72639 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.729913   72639 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.730335   72639 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.889181   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.912728   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:17.919124   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:17.920117   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:17.934314   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1014 15:02:17.951143   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:17.956588   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1014 15:02:17.964968   72639 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1014 15:02:17.965031   72639 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:17.965066   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:16.139535   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:18.637888   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:16.379768   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:16.380165   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:16.380236   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:16.380142   73776 retry.go:31] will retry after 1.022289492s: waiting for machine to come up
	I1014 15:02:17.403892   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:17.404406   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:17.404430   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:17.404383   73776 retry.go:31] will retry after 1.277226075s: waiting for machine to come up
	I1014 15:02:18.683704   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:18.684176   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:18.684200   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:18.684126   73776 retry.go:31] will retry after 2.146714263s: waiting for machine to come up
	I1014 15:02:18.406707   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.412201   72390 pod_ready.go:103] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:21.406229   72390 pod_ready.go:93] pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.406256   72390 pod_ready.go:82] duration metric: took 7.508120497s for pod "coredns-7c65d6cfc9-994hx" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.406269   72390 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413868   72390 pod_ready.go:93] pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.413896   72390 pod_ready.go:82] duration metric: took 7.618897ms for pod "etcd-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.413910   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:18.041388   72639 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1014 15:02:18.041436   72639 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.041489   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.041504   72639 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1014 15:02:18.041540   72639 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.041579   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069534   72639 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1014 15:02:18.069582   72639 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1014 15:02:18.069631   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.069794   72639 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1014 15:02:18.069821   72639 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.069852   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.096492   72639 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1014 15:02:18.096536   72639 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.096575   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104764   72639 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1014 15:02:18.104810   72639 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.104816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.104854   72639 ssh_runner.go:195] Run: which crictl
	I1014 15:02:18.104876   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.104885   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.104980   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.104984   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.105025   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.119784   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.213816   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.241644   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.288717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.288820   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.288931   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.289005   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.295481   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.376936   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1014 15:02:18.393755   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1014 15:02:18.449717   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1014 15:02:18.449798   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1014 15:02:18.449824   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1014 15:02:18.449904   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1014 15:02:18.461905   72639 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1014 15:02:18.508804   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1014 15:02:18.521502   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1014 15:02:18.612103   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1014 15:02:18.613450   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1014 15:02:18.613548   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1014 15:02:18.613625   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1014 15:02:18.613715   72639 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1014 15:02:18.741774   72639 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:18.888495   72639 cache_images.go:92] duration metric: took 1.16058525s to LoadCachedImages
	W1014 15:02:18.888578   72639 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0: no such file or directory
	I1014 15:02:18.888594   72639 kubeadm.go:934] updating node { 192.168.72.138 8443 v1.20.0 crio true true} ...
	I1014 15:02:18.888707   72639 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-399767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.138
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:18.888791   72639 ssh_runner.go:195] Run: crio config
	I1014 15:02:18.943058   72639 cni.go:84] Creating CNI manager for ""
	I1014 15:02:18.943082   72639 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:18.943091   72639 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:18.943108   72639 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.138 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-399767 NodeName:old-k8s-version-399767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.138"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.138 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 15:02:18.943225   72639 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.138
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-399767"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.138
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.138"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
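This is the kubeadm v1beta2 configuration rendered for the Kubernetes v1.20.0 restart path. As the later lines in this log show, it is written to /var/tmp/minikube/kubeadm.yaml.new, promoted to kubeadm.yaml, and then replayed phase by phase rather than through a single full kubeadm init. A minimal sketch of that replay, using the binary path and config file exactly as they appear further down:

	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml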
	
	I1014 15:02:18.943285   72639 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 15:02:18.956635   72639 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:18.956727   72639 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:18.970846   72639 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1014 15:02:18.992163   72639 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:19.012061   72639 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1014 15:02:19.033158   72639 ssh_runner.go:195] Run: grep 192.168.72.138	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:19.037195   72639 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.138	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:19.051127   72639 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:19.172992   72639 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:19.190545   72639 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767 for IP: 192.168.72.138
	I1014 15:02:19.190572   72639 certs.go:194] generating shared ca certs ...
	I1014 15:02:19.190592   72639 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.190786   72639 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:19.190843   72639 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:19.190853   72639 certs.go:256] generating profile certs ...
	I1014 15:02:19.190973   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/client.key
	I1014 15:02:19.191053   72639 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key.c5ef93ea
	I1014 15:02:19.191108   72639 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key
	I1014 15:02:19.191264   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:19.191302   72639 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:19.191314   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:19.191345   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:19.191374   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:19.191423   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:19.191477   72639 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:19.192328   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:19.248981   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:19.281262   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:19.312859   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:19.351940   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 15:02:19.405710   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:19.441313   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:19.481774   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/old-k8s-version-399767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 15:02:19.509433   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:19.537994   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:19.564460   72639 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:19.593632   72639 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:19.614775   72639 ssh_runner.go:195] Run: openssl version
	I1014 15:02:19.623548   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:19.636680   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642225   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.642286   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:19.648609   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:19.661130   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:19.672988   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678119   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.678189   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:19.684583   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:19.696685   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:19.708338   72639 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713443   72639 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.713502   72639 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:19.719482   72639 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
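The /etc/ssl/certs/<hash>.0 symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes: the value printed by each preceding openssl x509 -hash -noout call is what the link is named after, which is how OpenSSL's hashed certificate directory lookup finds the CA. For example, for the minikube CA installed above:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0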
	I1014 15:02:19.731720   72639 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:19.739006   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:19.747558   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:19.756399   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:19.764987   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:19.773320   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:19.781239   72639 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
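The openssl -checkend 86400 probes above are expiry checks: each exits 0 only if the certificate remains valid for at least another 86400 seconds (24 hours), presumably so stale certificates can be regenerated before the existing control plane is reused. A standalone equivalent for one of the checked files, using the same path as in the log:

	if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	  echo 'apiserver-kubelet-client.crt expires within 24h'
	fi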
	I1014 15:02:19.788638   72639 kubeadm.go:392] StartCluster: {Name:old-k8s-version-399767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-399767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.138 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:19.788753   72639 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:19.788810   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.829586   72639 cri.go:89] found id: ""
	I1014 15:02:19.829641   72639 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:19.844632   72639 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:19.844654   72639 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:19.844708   72639 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:19.860547   72639 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:19.861848   72639 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-399767" does not appear in /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:02:19.862755   72639 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-7836/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-399767" cluster setting kubeconfig missing "old-k8s-version-399767" context setting]
	I1014 15:02:19.863757   72639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:19.927447   72639 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:19.940830   72639 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.138
	I1014 15:02:19.940919   72639 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:19.940947   72639 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:19.941009   72639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:19.983689   72639 cri.go:89] found id: ""
	I1014 15:02:19.983769   72639 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:20.007079   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:20.023868   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:20.023896   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:20.023971   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:20.038661   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:20.038734   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:20.054357   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:20.068771   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:20.068843   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:20.081157   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.095416   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:20.095483   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:20.109099   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:20.120608   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:20.120680   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:20.133217   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:20.145896   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:20.311840   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.472918   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.161037865s)
	I1014 15:02:21.472953   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.739827   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.833423   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:21.931874   72639 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:21.931987   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.432595   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:22.932784   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:21.138446   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.636836   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:20.833532   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:20.833974   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:20.834000   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:20.833930   73776 retry.go:31] will retry after 1.936414638s: waiting for machine to come up
	I1014 15:02:22.771789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:22.772183   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:22.772206   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:22.772148   73776 retry.go:31] will retry after 2.51581517s: waiting for machine to come up
	I1014 15:02:25.290082   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:25.290491   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:25.290518   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:25.290453   73776 retry.go:31] will retry after 3.279920525s: waiting for machine to come up
	I1014 15:02:21.420355   72390 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.420385   72390 pod_ready.go:82] duration metric: took 6.465669ms for pod "kube-apiserver-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.420398   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427723   72390 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.427747   72390 pod_ready.go:82] duration metric: took 7.340946ms for pod "kube-controller-manager-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.427760   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433500   72390 pod_ready.go:93] pod "kube-proxy-rh82t" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.433526   72390 pod_ready.go:82] duration metric: took 5.757064ms for pod "kube-proxy-rh82t" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.433543   72390 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802632   72390 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace has status "Ready":"True"
	I1014 15:02:21.802660   72390 pod_ready.go:82] duration metric: took 369.107697ms for pod "kube-scheduler-default-k8s-diff-port-201291" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:21.802672   72390 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	I1014 15:02:23.811046   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:26.308105   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:23.432728   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:23.932296   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.432079   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:24.932064   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.432201   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.932119   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.432423   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:26.932675   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.432633   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:27.932380   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:25.637287   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.137136   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.572901   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:28.573383   71679 main.go:141] libmachine: (no-preload-813300) DBG | unable to find current IP address of domain no-preload-813300 in network mk-no-preload-813300
	I1014 15:02:28.573421   71679 main.go:141] libmachine: (no-preload-813300) DBG | I1014 15:02:28.573304   73776 retry.go:31] will retry after 5.283390724s: waiting for machine to come up
	I1014 15:02:28.310800   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:30.400310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:28.432518   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:28.932871   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.432350   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:29.932761   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.432621   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.932873   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.432716   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:31.932364   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.432747   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:32.933039   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:30.637300   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.136858   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:33.858151   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858626   71679 main.go:141] libmachine: (no-preload-813300) Found IP for machine: 192.168.61.13
	I1014 15:02:33.858660   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has current primary IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.858670   71679 main.go:141] libmachine: (no-preload-813300) Reserving static IP address...
	I1014 15:02:33.859001   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.859022   71679 main.go:141] libmachine: (no-preload-813300) Reserved static IP address: 192.168.61.13
	I1014 15:02:33.859040   71679 main.go:141] libmachine: (no-preload-813300) DBG | skip adding static IP to network mk-no-preload-813300 - found existing host DHCP lease matching {name: "no-preload-813300", mac: "52:54:00:ab:86:40", ip: "192.168.61.13"}
	I1014 15:02:33.859055   71679 main.go:141] libmachine: (no-preload-813300) DBG | Getting to WaitForSSH function...
	I1014 15:02:33.859065   71679 main.go:141] libmachine: (no-preload-813300) Waiting for SSH to be available...
	I1014 15:02:33.860949   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861245   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.861287   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.861398   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH client type: external
	I1014 15:02:33.861424   71679 main.go:141] libmachine: (no-preload-813300) DBG | Using SSH private key: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa (-rw-------)
	I1014 15:02:33.861460   71679 main.go:141] libmachine: (no-preload-813300) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1014 15:02:33.861476   71679 main.go:141] libmachine: (no-preload-813300) DBG | About to run SSH command:
	I1014 15:02:33.861488   71679 main.go:141] libmachine: (no-preload-813300) DBG | exit 0
	I1014 15:02:33.991450   71679 main.go:141] libmachine: (no-preload-813300) DBG | SSH cmd err, output: <nil>: 
	I1014 15:02:33.991854   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetConfigRaw
	I1014 15:02:33.992623   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:33.995514   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.995884   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.995908   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.996225   71679 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/config.json ...
	I1014 15:02:33.996549   71679 machine.go:93] provisionDockerMachine start ...
	I1014 15:02:33.996572   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:33.996784   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:33.999385   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999751   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:33.999789   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:33.999948   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.000135   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000312   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.000455   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.000648   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.000874   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.000890   71679 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 15:02:34.114981   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1014 15:02:34.115014   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115245   71679 buildroot.go:166] provisioning hostname "no-preload-813300"
	I1014 15:02:34.115272   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.115421   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.117557   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.117890   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.117929   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.118027   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.118210   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118365   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.118524   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.118720   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.118913   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.118932   71679 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-813300 && echo "no-preload-813300" | sudo tee /etc/hostname
	I1014 15:02:34.246092   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-813300
	
	I1014 15:02:34.246149   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.248672   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249095   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.249122   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.249331   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.249505   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.249860   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.250061   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.250272   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.250297   71679 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-813300' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-813300/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-813300' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 15:02:34.373470   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 15:02:34.373512   71679 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19790-7836/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-7836/.minikube}
	I1014 15:02:34.373576   71679 buildroot.go:174] setting up certificates
	I1014 15:02:34.373594   71679 provision.go:84] configureAuth start
	I1014 15:02:34.373613   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetMachineName
	I1014 15:02:34.373903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:34.376697   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.376986   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.377009   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.377137   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.379469   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379813   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.379838   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.379981   71679 provision.go:143] copyHostCerts
	I1014 15:02:34.380034   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem, removing ...
	I1014 15:02:34.380050   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem
	I1014 15:02:34.380106   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/ca.pem (1078 bytes)
	I1014 15:02:34.380194   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem, removing ...
	I1014 15:02:34.380201   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem
	I1014 15:02:34.380223   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/cert.pem (1123 bytes)
	I1014 15:02:34.380282   71679 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem, removing ...
	I1014 15:02:34.380288   71679 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem
	I1014 15:02:34.380305   71679 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-7836/.minikube/key.pem (1679 bytes)
	I1014 15:02:34.380362   71679 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem org=jenkins.no-preload-813300 san=[127.0.0.1 192.168.61.13 localhost minikube no-preload-813300]
	I1014 15:02:34.421281   71679 provision.go:177] copyRemoteCerts
	I1014 15:02:34.421331   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 15:02:34.421353   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.423903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424219   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.424248   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.424471   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.424665   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.424807   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.424948   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.512847   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1014 15:02:34.539814   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 15:02:34.568946   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 15:02:34.593444   71679 provision.go:87] duration metric: took 219.83393ms to configureAuth
	I1014 15:02:34.593467   71679 buildroot.go:189] setting minikube options for container-runtime
	I1014 15:02:34.593661   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:02:34.593744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.596317   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596626   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.596659   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.596819   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.597008   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597159   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.597295   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.597433   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.597611   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.597631   71679 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1014 15:02:34.837224   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1014 15:02:34.837244   71679 machine.go:96] duration metric: took 840.680679ms to provisionDockerMachine
	I1014 15:02:34.837256   71679 start.go:293] postStartSetup for "no-preload-813300" (driver="kvm2")
	I1014 15:02:34.837265   71679 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 15:02:34.837281   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:34.837593   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 15:02:34.837625   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.840357   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840677   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.840702   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.840845   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.841025   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.841193   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.841363   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:34.930754   71679 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 15:02:34.935428   71679 info.go:137] Remote host: Buildroot 2023.02.9
	I1014 15:02:34.935457   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/addons for local assets ...
	I1014 15:02:34.935541   71679 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-7836/.minikube/files for local assets ...
	I1014 15:02:34.935659   71679 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem -> 150232.pem in /etc/ssl/certs
	I1014 15:02:34.935795   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 15:02:34.946363   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:34.973029   71679 start.go:296] duration metric: took 135.76066ms for postStartSetup
	I1014 15:02:34.973074   71679 fix.go:56] duration metric: took 23.72449375s for fixHost
	I1014 15:02:34.973098   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:34.975897   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976211   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:34.976237   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:34.976487   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:34.976687   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976813   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:34.976923   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:34.977075   71679 main.go:141] libmachine: Using SSH client type: native
	I1014 15:02:34.977294   71679 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 192.168.61.13 22 <nil> <nil>}
	I1014 15:02:34.977309   71679 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1014 15:02:35.091556   71679 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728918155.078304162
	
	I1014 15:02:35.091581   71679 fix.go:216] guest clock: 1728918155.078304162
	I1014 15:02:35.091590   71679 fix.go:229] Guest: 2024-10-14 15:02:35.078304162 +0000 UTC Remote: 2024-10-14 15:02:34.973079478 +0000 UTC m=+359.485826316 (delta=105.224684ms)
	I1014 15:02:35.091610   71679 fix.go:200] guest clock delta is within tolerance: 105.224684ms
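The reported delta is simply guest clock minus the host-side timestamp: 1728918155.078304162 − 1728918154.973079478 ≈ 0.105224684 s, i.e. the 105.224684ms logged above, comfortably inside the drift tolerance.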
	I1014 15:02:35.091616   71679 start.go:83] releasing machines lock for "no-preload-813300", held for 23.843071366s
	I1014 15:02:35.091641   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.091899   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:35.094383   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094712   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.094733   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.094910   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095353   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095534   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:02:35.095589   71679 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 15:02:35.095658   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.095750   71679 ssh_runner.go:195] Run: cat /version.json
	I1014 15:02:35.095773   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:02:35.098288   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098316   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098680   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098713   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098743   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:35.098795   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:35.098835   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099003   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099186   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099198   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:02:35.099367   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:02:35.099371   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.099513   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:02:35.099728   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:02:35.179961   71679 ssh_runner.go:195] Run: systemctl --version
	I1014 15:02:35.205523   71679 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1014 15:02:35.350662   71679 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1014 15:02:35.356870   71679 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1014 15:02:35.356941   71679 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 15:02:35.374967   71679 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1014 15:02:35.374997   71679 start.go:495] detecting cgroup driver to use...
	I1014 15:02:35.375067   71679 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1014 15:02:35.393194   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1014 15:02:35.408295   71679 docker.go:217] disabling cri-docker service (if available) ...
	I1014 15:02:35.408362   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 15:02:35.423927   71679 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 15:02:35.438753   71679 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 15:02:32.809221   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:34.811962   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:35.567539   71679 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 15:02:35.702830   71679 docker.go:233] disabling docker service ...
	I1014 15:02:35.702916   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 15:02:35.720822   71679 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 15:02:35.735403   71679 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 15:02:35.880532   71679 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 15:02:36.003343   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
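The block above is the runtime cleanup before CRI-O takes over: containerd is stopped, cri-dockerd is stopped/disabled/masked, and the docker service is stopped/disabled/masked. A condensed sketch built only from the systemctl calls visible in the log:

    # sketch; same order as the logged commands
    sudo systemctl stop -f containerd
    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service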
	I1014 15:02:36.018230   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 15:02:36.037065   71679 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1014 15:02:36.037134   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.047820   71679 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1014 15:02:36.047880   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.058531   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.069760   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.081047   71679 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 15:02:36.092384   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.103241   71679 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1014 15:02:36.121771   71679 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
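The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place; assuming the defaults implied by those expressions, the relevant fragment of the drop-in would end up looking roughly like this (a sketch, not captured from the host):

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]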
	I1014 15:02:36.132886   71679 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 15:02:36.143239   71679 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 15:02:36.143308   71679 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 15:02:36.156582   71679 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 15:02:36.165955   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:36.283857   71679 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1014 15:02:36.388165   71679 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1014 15:02:36.388243   71679 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1014 15:02:36.393324   71679 start.go:563] Will wait 60s for crictl version
	I1014 15:02:36.393378   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.397236   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 15:02:36.444749   71679 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1014 15:02:36.444839   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.474831   71679 ssh_runner.go:195] Run: crio --version
	I1014 15:02:36.520531   71679 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.29.1 ...
	I1014 15:02:33.432474   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:33.932719   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.432581   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:34.932863   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.432886   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.932915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.432852   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:36.932367   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.432894   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:37.933035   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:35.637235   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.137613   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:36.521865   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetIP
	I1014 15:02:36.524566   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.524956   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:02:36.524984   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:02:36.525213   71679 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1014 15:02:36.529579   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
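The one-liner above drops any stale host.minikube.internal entry from /etc/hosts and re-appends the current gateway mapping; with the subnet shown in the log, the resulting entry is:

    192.168.61.1	host.minikube.internal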
	I1014 15:02:36.542554   71679 kubeadm.go:883] updating cluster {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 15:02:36.542701   71679 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 15:02:36.542737   71679 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 15:02:36.585681   71679 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.1". assuming images are not preloaded.
	I1014 15:02:36.585719   71679 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.1 registry.k8s.io/kube-controller-manager:v1.31.1 registry.k8s.io/kube-scheduler:v1.31.1 registry.k8s.io/kube-proxy:v1.31.1 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1014 15:02:36.585806   71679 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.585838   71679 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.585865   71679 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.585886   71679 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1014 15:02:36.585925   71679 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.585814   71679 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.585954   71679 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.585843   71679 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587263   71679 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.587290   71679 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.587326   71679 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.587289   71679 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.587274   71679 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1014 15:02:36.737070   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.750146   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.750401   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.767605   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1014 15:02:36.775005   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.797223   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.833657   71679 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.1" does not exist at hash "175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1" in container runtime
	I1014 15:02:36.833708   71679 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.833754   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.833875   71679 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.1" does not exist at hash "9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b" in container runtime
	I1014 15:02:36.833896   71679 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.833929   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.850009   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.911675   71679 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1014 15:02:36.911720   71679 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.911779   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973319   71679 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1014 15:02:36.973354   71679 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:36.973383   71679 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.1" does not exist at hash "6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee" in container runtime
	I1014 15:02:36.973394   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973414   71679 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:36.973453   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:36.973456   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:36.973519   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:36.973619   71679 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.1" needs transfer: "registry.k8s.io/kube-proxy:v1.31.1" does not exist at hash "60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561" in container runtime
	I1014 15:02:36.973640   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:36.973644   71679 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:36.973671   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.044689   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.044739   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.044815   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.044860   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.044907   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.044947   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166670   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.166737   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.166794   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.1
	I1014 15:02:37.166908   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1014 15:02:37.166924   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.1
	I1014 15:02:37.272802   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.272835   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1014 15:02:37.287078   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1
	I1014 15:02:37.287167   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.1
	I1014 15:02:37.287207   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.287240   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1014 15:02:37.287293   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1014 15:02:37.287320   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:37.287367   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:37.354510   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1
	I1014 15:02:37.354621   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1014 15:02:37.354659   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.1 (exists)
	I1014 15:02:37.354676   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354700   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1014 15:02:37.354711   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:37.354719   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1
	I1014 15:02:37.354790   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1
	I1014 15:02:37.354812   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.1 (exists)
	I1014 15:02:37.354865   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:37.532403   71679 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.443614   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.1: (2.089069189s)
	I1014 15:02:39.443676   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1
	I1014 15:02:39.443766   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.1: (2.089027703s)
	I1014 15:02:39.443790   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.1 from cache
	I1014 15:02:39.443775   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:39.443813   71679 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443833   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0: (2.089105476s)
	I1014 15:02:39.443854   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1014 15:02:39.443861   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1014 15:02:39.443911   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.1: (2.089031069s)
	I1014 15:02:39.443933   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.1 (exists)
	I1014 15:02:39.443986   71679 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.911557292s)
	I1014 15:02:39.444029   71679 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1014 15:02:39.444057   71679 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:39.444111   71679 ssh_runner.go:195] Run: which crictl
	I1014 15:02:37.309522   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:39.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:38.432551   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:38.932486   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.432591   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:39.932694   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.432065   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.932044   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.432313   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:41.933055   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.432453   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:42.932258   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:40.137656   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:42.637462   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:41.514958   71679 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.1: (2.071133048s)
	I1014 15:02:41.514987   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (2.071109487s)
	I1014 15:02:41.515016   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1014 15:02:41.515041   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515046   71679 ssh_runner.go:235] Completed: which crictl: (2.070916553s)
	I1014 15:02:41.514994   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.1 (exists)
	I1014 15:02:41.515093   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1
	I1014 15:02:41.515105   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:41.569878   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401013   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.1: (1.885889648s)
	I1014 15:02:43.401053   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.1 from cache
	I1014 15:02:43.401068   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.831164682s)
	I1014 15:02:43.401082   71679 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:43.401131   71679 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:02:43.401139   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1014 15:02:41.809862   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.810054   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:45.810567   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:43.432054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:43.932139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.432261   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.932517   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.432959   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:45.933103   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.432845   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:46.932825   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.432059   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:47.932745   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:44.639020   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:47.136927   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:49.137423   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:46.799144   71679 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.397987929s)
	I1014 15:02:46.799198   71679 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1014 15:02:46.799201   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.398044957s)
	I1014 15:02:46.799222   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1014 15:02:46.799249   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.799295   71679 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:46.799296   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1
	I1014 15:02:46.804398   71679 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1014 15:02:48.971377   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.1: (2.171989764s)
	I1014 15:02:48.971409   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.1 from cache
	I1014 15:02:48.971436   71679 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.971481   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1
	I1014 15:02:48.309980   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.311361   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:48.432869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:48.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.432754   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:49.932514   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.432199   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:50.932861   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.432404   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.932097   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.432569   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:52.933078   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:51.141481   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.638306   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:50.935341   71679 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.1: (1.963834471s)
	I1014 15:02:50.935373   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.1 from cache
	I1014 15:02:50.935401   71679 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:50.935452   71679 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1014 15:02:51.683211   71679 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/19790-7836/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1014 15:02:51.683268   71679 cache_images.go:123] Successfully loaded all cached images
	I1014 15:02:51.683277   71679 cache_images.go:92] duration metric: took 15.097525447s to LoadCachedImages
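	The block above shows the cached image tarballs being transferred to the node and loaded into CRI-O via podman: each "Loading image: /var/lib/minikube/images/..." line is followed by a `sudo podman load -i ...` run, and tarballs already present on the node are skipped ("copy: skipping ... (exists)"). The following is only a minimal Go sketch of that skip-then-load loop, assuming local execution and example image names; it is not minikube's actual ssh_runner/cache_images implementation.

	// load_images_sketch.go - illustrative only, not minikube code.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
	)

	// loadImage assumes the tarball has already been copied to dir and simply
	// runs `sudo podman load -i <tar>`, mirroring the log lines above.
	func loadImage(dir, name string) error {
		tar := filepath.Join(dir, name)
		if _, err := os.Stat(tar); err != nil {
			return fmt.Errorf("tarball missing: %w", err)
		}
		cmd := exec.Command("sudo", "podman", "load", "-i", tar)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// Hypothetical subset of the images this run loads.
		for _, img := range []string{"kube-scheduler_v1.31.1", "coredns_v1.11.3"} {
			if err := loadImage("/var/lib/minikube/images", img); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}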
	I1014 15:02:51.683293   71679 kubeadm.go:934] updating node { 192.168.61.13 8443 v1.31.1 crio true true} ...
	I1014 15:02:51.683441   71679 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-813300 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 15:02:51.683525   71679 ssh_runner.go:195] Run: crio config
	I1014 15:02:51.737769   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:02:51.737790   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:02:51.737799   71679 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 15:02:51.737818   71679 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.13 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-813300 NodeName:no-preload-813300 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 15:02:51.737955   71679 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-813300"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.13"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.13"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 15:02:51.738019   71679 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 15:02:51.749175   71679 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 15:02:51.749241   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 15:02:51.759120   71679 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1014 15:02:51.777293   71679 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 15:02:51.795073   71679 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I1014 15:02:51.815094   71679 ssh_runner.go:195] Run: grep 192.168.61.13	control-plane.minikube.internal$ /etc/hosts
	I1014 15:02:51.819087   71679 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 15:02:51.831806   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:02:51.953191   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:02:51.972342   71679 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300 for IP: 192.168.61.13
	I1014 15:02:51.972362   71679 certs.go:194] generating shared ca certs ...
	I1014 15:02:51.972379   71679 certs.go:226] acquiring lock for ca certs: {Name:mk2b4353509830c2548b75e97370e30f48bc133d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:02:51.972534   71679 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key
	I1014 15:02:51.972583   71679 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key
	I1014 15:02:51.972597   71679 certs.go:256] generating profile certs ...
	I1014 15:02:51.972732   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/client.key
	I1014 15:02:51.972822   71679 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key.4d535e2d
	I1014 15:02:51.972885   71679 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key
	I1014 15:02:51.973064   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem (1338 bytes)
	W1014 15:02:51.973102   71679 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023_empty.pem, impossibly tiny 0 bytes
	I1014 15:02:51.973111   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca-key.pem (1679 bytes)
	I1014 15:02:51.973151   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/ca.pem (1078 bytes)
	I1014 15:02:51.973180   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/cert.pem (1123 bytes)
	I1014 15:02:51.973203   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/certs/key.pem (1679 bytes)
	I1014 15:02:51.973260   71679 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem (1708 bytes)
	I1014 15:02:51.974077   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 15:02:52.019451   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 15:02:52.048323   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 15:02:52.086241   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1014 15:02:52.129342   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 15:02:52.157243   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 15:02:52.189093   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 15:02:52.214980   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/no-preload-813300/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 15:02:52.241595   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/certs/15023.pem --> /usr/share/ca-certificates/15023.pem (1338 bytes)
	I1014 15:02:52.270329   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/ssl/certs/150232.pem --> /usr/share/ca-certificates/150232.pem (1708 bytes)
	I1014 15:02:52.295153   71679 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-7836/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 15:02:52.321303   71679 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 15:02:52.339181   71679 ssh_runner.go:195] Run: openssl version
	I1014 15:02:52.345152   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 15:02:52.357167   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362387   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.362442   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 15:02:52.369003   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 15:02:52.380917   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15023.pem && ln -fs /usr/share/ca-certificates/15023.pem /etc/ssl/certs/15023.pem"
	I1014 15:02:52.392884   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397876   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:50 /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.397942   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15023.pem
	I1014 15:02:52.404038   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/15023.pem /etc/ssl/certs/51391683.0"
	I1014 15:02:52.415841   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/150232.pem && ln -fs /usr/share/ca-certificates/150232.pem /etc/ssl/certs/150232.pem"
	I1014 15:02:52.426973   71679 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431848   71679 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:50 /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.431914   71679 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/150232.pem
	I1014 15:02:52.439851   71679 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/150232.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 15:02:52.455014   71679 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 15:02:52.460088   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 15:02:52.466495   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 15:02:52.472659   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 15:02:52.483107   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 15:02:52.491272   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 15:02:52.497692   71679 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
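	The `openssl x509 -noout -in <cert> -checkend 86400` runs above verify that each control-plane certificate remains valid for at least the next 24 hours (86400 seconds) before the cluster restart proceeds. Below is a hedged Go equivalent using crypto/x509 instead of shelling out; the helper name and the single certificate path are illustrative, not part of minikube.

	// cert_expiry_sketch.go - illustrative only.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the same condition `openssl x509 -checkend` tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}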
	I1014 15:02:52.504352   71679 kubeadm.go:392] StartCluster: {Name:no-preload-813300 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-813300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 15:02:52.504456   71679 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1014 15:02:52.504502   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.544010   71679 cri.go:89] found id: ""
	I1014 15:02:52.544074   71679 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 15:02:52.554296   71679 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 15:02:52.554314   71679 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 15:02:52.554364   71679 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 15:02:52.564193   71679 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 15:02:52.565367   71679 kubeconfig.go:125] found "no-preload-813300" server: "https://192.168.61.13:8443"
	I1014 15:02:52.567519   71679 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 15:02:52.577268   71679 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.13
	I1014 15:02:52.577296   71679 kubeadm.go:1160] stopping kube-system containers ...
	I1014 15:02:52.577305   71679 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1014 15:02:52.577343   71679 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 15:02:52.614462   71679 cri.go:89] found id: ""
	I1014 15:02:52.614551   71679 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1014 15:02:52.631835   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:02:52.642314   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:02:52.642334   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:02:52.642378   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:02:52.652036   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:02:52.652114   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:02:52.662263   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:02:52.672145   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:02:52.672214   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:02:52.682085   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.691628   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:02:52.691706   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:02:52.701314   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:02:52.711232   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:02:52.711291   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:02:52.722480   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:02:52.733359   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:52.849407   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.647528   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.863718   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:02:53.938091   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
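	The five runs above re-execute kubeadm's init phases individually (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml, rather than performing a full `kubeadm init`. The Go sketch below mirrors that phased restart by shelling out to the same kubeadm binary; it drops the `sudo env PATH=...` wrapper seen in the log and is an illustration only.

	// kubeadm_phases_sketch.go - illustrative only.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same phase order as the log: certs -> kubeconfig -> kubelet-start ->
		// control-plane -> etcd.
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
			cmd := exec.Command("sudo", append([]string{"/var/lib/minikube/binaries/v1.31.1/kubeadm"}, args...)...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				fmt.Fprintf(os.Stderr, "kubeadm %v failed: %v\n", p, err)
				os.Exit(1)
			}
		}
	}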
	I1014 15:02:54.046445   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:02:54.046544   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.546715   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.047285   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.062239   71679 api_server.go:72] duration metric: took 1.015804644s to wait for apiserver process to appear ...
	I1014 15:02:55.062265   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:02:55.062296   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:55.062806   71679 api_server.go:269] stopped: https://192.168.61.13:8443/healthz: Get "https://192.168.61.13:8443/healthz": dial tcp 192.168.61.13:8443: connect: connection refused
	I1014 15:02:52.811186   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.309901   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:53.432335   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:53.932860   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.433105   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:54.933031   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.432058   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:55.932422   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.432618   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.932727   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.432265   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:57.932733   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:56.136357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.136956   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:55.562748   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.274557   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.274587   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.274625   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.296655   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1014 15:02:58.296682   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1014 15:02:58.563094   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:58.567676   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:58.567717   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.063266   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.067656   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.067697   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:02:59.563300   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:02:59.569667   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1014 15:02:59.569699   71679 api_server.go:103] status: https://192.168.61.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1014 15:03:00.063305   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:03:00.067834   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:03:00.079522   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:03:00.079555   71679 api_server.go:131] duration metric: took 5.017283463s to wait for apiserver health ...
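	Between 15:02:55 and 15:03:00 the restart loop polls https://192.168.61.13:8443/healthz, treating "connection refused", 403 (anonymous user) and 500 (rbac/priority-class post-start hooks still failing) as not-ready and stopping once the endpoint returns 200 "ok". A simplified, self-contained polling loop in Go could look like the sketch below; it skips TLS verification because the probe is anonymous, and it is not minikube's api_server.go.

	// healthz_poll_sketch.go - illustrative only.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz retries until /healthz returns 200 or the timeout expires.
	// Any error or non-200 status (403, 500, ...) just means "poll again".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.61.13:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}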
	I1014 15:03:00.079565   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:03:00.079572   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:03:00.081793   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:03:00.083132   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:03:00.095329   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:03:00.114972   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:03:00.148816   71679 system_pods.go:59] 8 kube-system pods found
	I1014 15:03:00.148849   71679 system_pods.go:61] "coredns-7c65d6cfc9-5cft7" [43bb92da-74e8-4430-a889-3c23ed3fef67] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1014 15:03:00.148859   71679 system_pods.go:61] "etcd-no-preload-813300" [c3e9137c-855e-49e2-8891-8df57707f75a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1014 15:03:00.148867   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [683c2d48-6c84-470c-96e5-0706a1884ee7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1014 15:03:00.148872   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [405991ef-9b48-4770-ba31-a213f0eae077] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1014 15:03:00.148882   71679 system_pods.go:61] "kube-proxy-jd4t4" [6c5c517b-855e-440c-976e-9c5e5d0710f6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1014 15:03:00.148887   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [e76569e6-74c8-44dd-b283-a82072226686] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1014 15:03:00.148892   71679 system_pods.go:61] "metrics-server-6867b74b74-br4tl" [5b3425c6-9847-447d-a9ab-076c7cc1634f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:03:00.148896   71679 system_pods.go:61] "storage-provisioner" [2c52e790-afa9-4131-8e28-801eb3f822d5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1014 15:03:00.148906   71679 system_pods.go:74] duration metric: took 33.908487ms to wait for pod list to return data ...
	I1014 15:03:00.148918   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:03:00.161000   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:03:00.161029   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:03:00.161042   71679 node_conditions.go:105] duration metric: took 12.118841ms to run NodePressure ...
	I1014 15:03:00.161067   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1014 15:03:00.510702   71679 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515692   71679 kubeadm.go:739] kubelet initialised
	I1014 15:03:00.515715   71679 kubeadm.go:740] duration metric: took 4.986873ms waiting for restarted kubelet to initialise ...
	I1014 15:03:00.515724   71679 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:03:00.521483   71679 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
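	From here the run waits up to 4m0s for each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the Ready condition, which is what the pod_ready.go lines below record. The client-go sketch that follows shows one way such a readiness wait can be written; the kubeconfig path is a placeholder, the pod name is taken from this run, and the plain poll loop stands in for minikube's pod_ready helper.

	// pod_ready_sketch.go - illustrative only.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady checks the PodReady condition, the same signal the
	// "has status \"Ready\":\"True\"" log lines report.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Hypothetical kubeconfig path for illustration.
		config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-5cft7", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}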
	I1014 15:02:57.810518   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:59.811287   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:02:58.432774   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:58.932666   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.433020   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:02:59.932671   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.432717   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.932917   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.432735   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:01.932668   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.432260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:02.932075   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:00.137257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.137876   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:02.528402   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.530210   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:04.530241   71679 pod_ready.go:82] duration metric: took 4.008725187s for pod "coredns-7c65d6cfc9-5cft7" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:04.530254   71679 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:02.309134   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:04.311421   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:03.432139   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:03.932241   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.432421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.932869   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.432972   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:05.933010   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.432409   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:06.932778   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.432067   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:07.932749   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:04.636760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:07.136410   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.137483   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.537318   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:09.037462   71679 pod_ready.go:103] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:06.810244   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.810932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.813334   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:08.432529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:08.932034   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:09.933054   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.432938   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:10.932661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.432392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.932068   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.432066   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:12.932122   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:11.636654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.637819   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:10.536905   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:10.536932   71679 pod_ready.go:82] duration metric: took 6.006669219s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:10.536945   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:12.551283   71679 pod_ready.go:103] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.044142   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.044166   71679 pod_ready.go:82] duration metric: took 2.507213726s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.044176   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049176   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.049196   71679 pod_ready.go:82] duration metric: took 5.01377ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.049206   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053623   71679 pod_ready.go:93] pod "kube-proxy-jd4t4" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.053646   71679 pod_ready.go:82] duration metric: took 4.434586ms for pod "kube-proxy-jd4t4" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.053654   71679 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559610   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:03:13.559632   71679 pod_ready.go:82] duration metric: took 505.972722ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.559642   71679 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	I1014 15:03:13.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.309622   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:13.432556   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:13.932427   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.432053   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:14.932460   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.432714   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:15.933071   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.432567   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.932414   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.432985   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:17.932960   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:16.136599   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.137964   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:15.566234   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.567065   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:20.066221   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:17.309837   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:19.310194   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:18.433026   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:18.932015   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.432042   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:19.932030   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.433050   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:20.932658   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.432667   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:21.933045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:21.933127   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:21.973476   72639 cri.go:89] found id: ""
	I1014 15:03:21.973507   72639 logs.go:282] 0 containers: []
	W1014 15:03:21.973517   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:21.973523   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:21.973584   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:22.011700   72639 cri.go:89] found id: ""
	I1014 15:03:22.011732   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.011742   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:22.011748   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:22.011814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:22.047721   72639 cri.go:89] found id: ""
	I1014 15:03:22.047744   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.047752   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:22.047762   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:22.047814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:22.091618   72639 cri.go:89] found id: ""
	I1014 15:03:22.091644   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.091652   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:22.091657   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:22.091706   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:22.129997   72639 cri.go:89] found id: ""
	I1014 15:03:22.130036   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.130047   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:22.130055   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:22.130114   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:22.168024   72639 cri.go:89] found id: ""
	I1014 15:03:22.168053   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.168061   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:22.168067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:22.168136   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:22.202633   72639 cri.go:89] found id: ""
	I1014 15:03:22.202660   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.202670   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:22.202677   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:22.202739   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:22.238224   72639 cri.go:89] found id: ""
	I1014 15:03:22.238251   72639 logs.go:282] 0 containers: []
	W1014 15:03:22.238259   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:22.238267   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:22.238278   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:22.251940   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:22.251991   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:22.379777   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:22.379799   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:22.379814   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:22.456468   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:22.456507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:22.495404   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:22.495433   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:20.636995   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.637141   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:22.066371   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.566023   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:21.809579   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:24.309010   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:25.048061   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:25.068586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:25.068658   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:25.121199   72639 cri.go:89] found id: ""
	I1014 15:03:25.121228   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.121237   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:25.121243   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:25.121303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:25.174705   72639 cri.go:89] found id: ""
	I1014 15:03:25.174738   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.174749   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:25.174757   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:25.174815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:25.236972   72639 cri.go:89] found id: ""
	I1014 15:03:25.237002   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.237013   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:25.237020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:25.237077   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:25.276443   72639 cri.go:89] found id: ""
	I1014 15:03:25.276473   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.276483   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:25.276489   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:25.276541   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:25.314573   72639 cri.go:89] found id: ""
	I1014 15:03:25.314623   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.314636   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:25.314645   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:25.314708   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:25.357489   72639 cri.go:89] found id: ""
	I1014 15:03:25.357515   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.357525   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:25.357533   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:25.357595   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:25.397504   72639 cri.go:89] found id: ""
	I1014 15:03:25.397527   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.397538   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:25.397546   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:25.397597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:25.433139   72639 cri.go:89] found id: ""
	I1014 15:03:25.433162   72639 logs.go:282] 0 containers: []
	W1014 15:03:25.433170   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:25.433179   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:25.433193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:25.448088   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:25.448121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:25.522377   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:25.522401   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:25.522415   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:25.595505   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:25.595538   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:25.643478   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:25.643511   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:25.137557   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.637096   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:27.067425   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.565568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:26.809419   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:29.309193   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.310234   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:28.195236   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:28.208612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:28.208686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:28.248538   72639 cri.go:89] found id: ""
	I1014 15:03:28.248569   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.248581   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:28.248588   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:28.248652   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:28.286103   72639 cri.go:89] found id: ""
	I1014 15:03:28.286131   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.286143   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:28.286149   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:28.286209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:28.321335   72639 cri.go:89] found id: ""
	I1014 15:03:28.321371   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.321383   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:28.321391   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:28.321453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:28.358538   72639 cri.go:89] found id: ""
	I1014 15:03:28.358571   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.358581   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:28.358588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:28.358661   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:28.397058   72639 cri.go:89] found id: ""
	I1014 15:03:28.397087   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.397099   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:28.397106   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:28.397175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:28.434010   72639 cri.go:89] found id: ""
	I1014 15:03:28.434032   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.434040   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:28.434045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:28.434095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:28.474646   72639 cri.go:89] found id: ""
	I1014 15:03:28.474672   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.474681   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:28.474687   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:28.474736   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:28.512833   72639 cri.go:89] found id: ""
	I1014 15:03:28.512860   72639 logs.go:282] 0 containers: []
	W1014 15:03:28.512871   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:28.512882   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:28.512894   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:28.526233   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:28.526262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:28.601366   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:28.601393   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:28.601416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:28.690261   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:28.690300   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:28.734134   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:28.734158   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.290184   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:31.303493   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:31.303558   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:31.341521   72639 cri.go:89] found id: ""
	I1014 15:03:31.341552   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.341563   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:31.341569   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:31.341627   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:31.378811   72639 cri.go:89] found id: ""
	I1014 15:03:31.378839   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.378851   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:31.378859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:31.378922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:31.416282   72639 cri.go:89] found id: ""
	I1014 15:03:31.416310   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.416321   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:31.416328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:31.416392   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:31.456089   72639 cri.go:89] found id: ""
	I1014 15:03:31.456123   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.456134   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:31.456142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:31.456202   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:31.496429   72639 cri.go:89] found id: ""
	I1014 15:03:31.496468   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.496478   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:31.496485   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:31.496548   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:31.535226   72639 cri.go:89] found id: ""
	I1014 15:03:31.535248   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.535256   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:31.535262   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:31.535321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:31.572580   72639 cri.go:89] found id: ""
	I1014 15:03:31.572608   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.572623   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:31.572631   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:31.572691   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:31.606736   72639 cri.go:89] found id: ""
	I1014 15:03:31.606759   72639 logs.go:282] 0 containers: []
	W1014 15:03:31.606766   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:31.606774   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:31.606785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:31.646048   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:31.646078   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:31.696818   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:31.696851   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:31.710099   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:31.710128   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:31.787756   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:31.787783   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:31.787798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:30.136436   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:32.138037   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.139660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:31.566034   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.567029   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:33.809434   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.309487   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:34.369392   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:34.383263   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:34.383344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:34.417763   72639 cri.go:89] found id: ""
	I1014 15:03:34.417797   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.417809   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:34.417816   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:34.417890   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:34.453361   72639 cri.go:89] found id: ""
	I1014 15:03:34.453391   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.453402   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:34.453409   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:34.453488   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:34.490878   72639 cri.go:89] found id: ""
	I1014 15:03:34.490905   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.490913   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:34.490919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:34.490980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:34.527554   72639 cri.go:89] found id: ""
	I1014 15:03:34.527584   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.527595   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:34.527603   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:34.527655   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:34.564813   72639 cri.go:89] found id: ""
	I1014 15:03:34.564841   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.564851   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:34.564857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:34.564903   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:34.599899   72639 cri.go:89] found id: ""
	I1014 15:03:34.599930   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.599942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:34.599949   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:34.600019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:34.641686   72639 cri.go:89] found id: ""
	I1014 15:03:34.641717   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.641728   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:34.641735   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:34.641794   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:34.681154   72639 cri.go:89] found id: ""
	I1014 15:03:34.681184   72639 logs.go:282] 0 containers: []
	W1014 15:03:34.681195   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:34.681205   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:34.681218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:34.719638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:34.719672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:34.771687   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:34.771722   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:34.785943   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:34.785972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:34.861821   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:34.861861   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:34.861875   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.441605   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:37.456763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:37.456828   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:37.494176   72639 cri.go:89] found id: ""
	I1014 15:03:37.494202   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.494210   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:37.494216   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:37.494268   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:37.538802   72639 cri.go:89] found id: ""
	I1014 15:03:37.538834   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.538846   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:37.538853   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:37.538913   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:37.586282   72639 cri.go:89] found id: ""
	I1014 15:03:37.586312   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.586322   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:37.586328   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:37.586397   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:37.632673   72639 cri.go:89] found id: ""
	I1014 15:03:37.632698   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.632709   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:37.632715   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:37.632771   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:37.673340   72639 cri.go:89] found id: ""
	I1014 15:03:37.673364   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.673372   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:37.673377   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:37.673427   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:37.718725   72639 cri.go:89] found id: ""
	I1014 15:03:37.718750   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.718758   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:37.718764   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:37.718807   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:37.760560   72639 cri.go:89] found id: ""
	I1014 15:03:37.760587   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.760597   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:37.760605   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:37.760665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:37.800912   72639 cri.go:89] found id: ""
	I1014 15:03:37.800941   72639 logs.go:282] 0 containers: []
	W1014 15:03:37.800949   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:37.800957   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:37.800968   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:37.815338   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:37.815363   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:37.893018   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:37.893050   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:37.893067   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:37.978315   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:37.978349   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:36.637635   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:39.136295   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:36.065915   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.066310   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.810020   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.810460   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:38.019760   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:38.019788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.570918   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:40.586058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:40.586122   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:40.623753   72639 cri.go:89] found id: ""
	I1014 15:03:40.623784   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.623795   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:40.623801   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:40.623862   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:40.663909   72639 cri.go:89] found id: ""
	I1014 15:03:40.663937   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.663946   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:40.663953   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:40.664008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:40.698572   72639 cri.go:89] found id: ""
	I1014 15:03:40.698615   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.698626   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:40.698633   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:40.698683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:40.734882   72639 cri.go:89] found id: ""
	I1014 15:03:40.734907   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.734914   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:40.734920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:40.734976   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:40.768429   72639 cri.go:89] found id: ""
	I1014 15:03:40.768455   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.768462   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:40.768468   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:40.768527   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:40.803429   72639 cri.go:89] found id: ""
	I1014 15:03:40.803456   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.803466   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:40.803474   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:40.803535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:40.842854   72639 cri.go:89] found id: ""
	I1014 15:03:40.842883   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.842905   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:40.842913   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:40.842988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:40.879638   72639 cri.go:89] found id: ""
	I1014 15:03:40.879661   72639 logs.go:282] 0 containers: []
	W1014 15:03:40.879669   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:40.879677   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:40.879687   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:40.924949   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:40.924983   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:40.976271   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:40.976304   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:40.991492   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:40.991520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:41.071418   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:41.071439   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:41.071453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:41.136877   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.637356   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:40.566353   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.065982   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.066405   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.310188   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:45.811549   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:43.652387   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:43.666239   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:43.666317   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:43.705726   72639 cri.go:89] found id: ""
	I1014 15:03:43.705752   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.705761   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:43.705766   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:43.705814   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:43.745648   72639 cri.go:89] found id: ""
	I1014 15:03:43.745672   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.745680   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:43.745685   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:43.745731   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:43.783032   72639 cri.go:89] found id: ""
	I1014 15:03:43.783055   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.783063   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:43.783068   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:43.783115   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:43.820582   72639 cri.go:89] found id: ""
	I1014 15:03:43.820607   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.820617   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:43.820623   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:43.820669   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:43.862312   72639 cri.go:89] found id: ""
	I1014 15:03:43.862338   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.862348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:43.862353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:43.862404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:43.898338   72639 cri.go:89] found id: ""
	I1014 15:03:43.898368   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.898379   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:43.898388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:43.898448   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:43.934682   72639 cri.go:89] found id: ""
	I1014 15:03:43.934709   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.934719   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:43.934726   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:43.934781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:43.970209   72639 cri.go:89] found id: ""
	I1014 15:03:43.970237   72639 logs.go:282] 0 containers: []
	W1014 15:03:43.970247   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:43.970257   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:43.970269   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:44.024791   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:44.024832   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:44.038431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:44.038457   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:44.117255   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:44.117291   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:44.117308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:44.199397   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:44.199436   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:46.739819   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:46.755553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:46.755625   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:46.797225   72639 cri.go:89] found id: ""
	I1014 15:03:46.797253   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.797265   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:46.797272   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:46.797335   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:46.832999   72639 cri.go:89] found id: ""
	I1014 15:03:46.833025   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.833036   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:46.833043   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:46.833103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:46.872711   72639 cri.go:89] found id: ""
	I1014 15:03:46.872733   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.872741   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:46.872746   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:46.872795   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:46.909945   72639 cri.go:89] found id: ""
	I1014 15:03:46.909968   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.909977   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:46.909985   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:46.910046   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:46.946036   72639 cri.go:89] found id: ""
	I1014 15:03:46.946067   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.946080   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:46.946087   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:46.946141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:46.981772   72639 cri.go:89] found id: ""
	I1014 15:03:46.981806   72639 logs.go:282] 0 containers: []
	W1014 15:03:46.981819   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:46.981828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:46.981896   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:47.022761   72639 cri.go:89] found id: ""
	I1014 15:03:47.022790   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.022800   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:47.022807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:47.022869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:47.057368   72639 cri.go:89] found id: ""
	I1014 15:03:47.057392   72639 logs.go:282] 0 containers: []
	W1014 15:03:47.057400   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:47.057408   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:47.057418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:47.134369   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:47.134408   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:47.179550   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:47.179586   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:47.233317   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:47.233355   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:47.247598   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:47.247629   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:47.321309   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:45.637760   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.136826   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:47.067003   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.565410   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:48.309520   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:50.812241   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:49.821955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:49.836907   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:49.836975   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:49.876651   72639 cri.go:89] found id: ""
	I1014 15:03:49.876682   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.876694   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:49.876713   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:49.876781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:49.913440   72639 cri.go:89] found id: ""
	I1014 15:03:49.913464   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.913473   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:49.913479   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:49.913535   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:49.949352   72639 cri.go:89] found id: ""
	I1014 15:03:49.949383   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.949395   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:49.949402   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:49.949463   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:49.984599   72639 cri.go:89] found id: ""
	I1014 15:03:49.984629   72639 logs.go:282] 0 containers: []
	W1014 15:03:49.984641   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:49.984649   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:49.984709   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:50.028049   72639 cri.go:89] found id: ""
	I1014 15:03:50.028072   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.028083   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:50.028090   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:50.028166   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:50.062272   72639 cri.go:89] found id: ""
	I1014 15:03:50.062294   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.062302   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:50.062308   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:50.062358   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:50.099722   72639 cri.go:89] found id: ""
	I1014 15:03:50.099750   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.099762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:50.099769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:50.099830   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:50.139984   72639 cri.go:89] found id: ""
	I1014 15:03:50.140005   72639 logs.go:282] 0 containers: []
	W1014 15:03:50.140013   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:50.140020   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:50.140032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:50.218467   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:50.218500   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:50.260600   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:50.260635   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:50.313725   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:50.313757   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:50.328431   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:50.328462   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:50.401334   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:52.901787   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:52.917836   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:52.917902   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:52.955387   72639 cri.go:89] found id: ""
	I1014 15:03:52.955418   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.955431   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:52.955440   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:52.955504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:52.990890   72639 cri.go:89] found id: ""
	I1014 15:03:52.990924   72639 logs.go:282] 0 containers: []
	W1014 15:03:52.990936   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:52.990945   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:52.991004   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:50.636581   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.137639   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:51.566403   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:54.066690   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.310174   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:55.809402   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:53.032344   72639 cri.go:89] found id: ""
	I1014 15:03:53.032374   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.032384   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:53.032390   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:53.032458   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:53.073501   72639 cri.go:89] found id: ""
	I1014 15:03:53.073527   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.073537   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:53.073544   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:53.073602   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:53.114273   72639 cri.go:89] found id: ""
	I1014 15:03:53.114307   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.114316   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:53.114334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:53.114389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:53.155448   72639 cri.go:89] found id: ""
	I1014 15:03:53.155475   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.155484   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:53.155490   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:53.155539   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:53.191304   72639 cri.go:89] found id: ""
	I1014 15:03:53.191338   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.191350   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:53.191357   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:53.191438   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:53.224664   72639 cri.go:89] found id: ""
	I1014 15:03:53.224691   72639 logs.go:282] 0 containers: []
	W1014 15:03:53.224702   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:53.224727   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:53.224744   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:53.275751   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:53.275786   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:53.289275   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:53.289303   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:53.369828   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:53.369855   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:53.369871   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:53.457248   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:53.457285   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:56.003384   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:56.017722   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:56.017782   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:56.056644   72639 cri.go:89] found id: ""
	I1014 15:03:56.056675   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.056686   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:56.056694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:56.056757   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:56.094482   72639 cri.go:89] found id: ""
	I1014 15:03:56.094507   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.094517   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:56.094524   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:56.094583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:56.129884   72639 cri.go:89] found id: ""
	I1014 15:03:56.129913   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.129921   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:56.129926   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:56.129974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:56.167171   72639 cri.go:89] found id: ""
	I1014 15:03:56.167198   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.167206   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:56.167211   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:56.167264   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:56.204400   72639 cri.go:89] found id: ""
	I1014 15:03:56.204433   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.204442   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:56.204447   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:56.204494   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:56.240407   72639 cri.go:89] found id: ""
	I1014 15:03:56.240437   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.240448   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:56.240456   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:56.240517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:56.277653   72639 cri.go:89] found id: ""
	I1014 15:03:56.277679   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.277687   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:56.277693   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:56.277738   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:56.313423   72639 cri.go:89] found id: ""
	I1014 15:03:56.313451   72639 logs.go:282] 0 containers: []
	W1014 15:03:56.313459   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:56.313468   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:56.313480   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:56.368094   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:56.368133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:03:56.382563   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:56.382621   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:56.455106   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:56.455130   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:56.455144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:56.532288   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:56.532329   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:55.636007   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:57.637196   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:56.566763   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.066227   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:58.309184   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:00.309370   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:03:59.072469   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:03:59.089024   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:03:59.089094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:03:59.130798   72639 cri.go:89] found id: ""
	I1014 15:03:59.130829   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.130840   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:03:59.130848   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:03:59.130908   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:03:59.167828   72639 cri.go:89] found id: ""
	I1014 15:03:59.167854   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.167864   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:03:59.167871   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:03:59.167932   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:03:59.223482   72639 cri.go:89] found id: ""
	I1014 15:03:59.223509   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.223520   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:03:59.223528   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:03:59.223590   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:03:59.261186   72639 cri.go:89] found id: ""
	I1014 15:03:59.261231   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.261243   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:03:59.261251   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:03:59.261314   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:03:59.296924   72639 cri.go:89] found id: ""
	I1014 15:03:59.296985   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.297000   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:03:59.297008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:03:59.297084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:03:59.333891   72639 cri.go:89] found id: ""
	I1014 15:03:59.333915   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.333923   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:03:59.333929   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:03:59.333991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:03:59.374106   72639 cri.go:89] found id: ""
	I1014 15:03:59.374134   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.374143   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:03:59.374150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:03:59.374222   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:03:59.412256   72639 cri.go:89] found id: ""
	I1014 15:03:59.412283   72639 logs.go:282] 0 containers: []
	W1014 15:03:59.412291   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:03:59.412298   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:03:59.412308   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:03:59.492869   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:03:59.492904   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:03:59.492923   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:03:59.576441   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:03:59.576473   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:03:59.618638   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:03:59.618668   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:03:59.671295   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:03:59.671331   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.184689   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:02.197763   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:02.197833   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:02.231709   72639 cri.go:89] found id: ""
	I1014 15:04:02.231734   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.231746   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:02.231753   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:02.231815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:02.269259   72639 cri.go:89] found id: ""
	I1014 15:04:02.269291   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.269303   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:02.269311   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:02.269390   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:02.305926   72639 cri.go:89] found id: ""
	I1014 15:04:02.305956   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.305967   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:02.305975   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:02.306034   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:02.349516   72639 cri.go:89] found id: ""
	I1014 15:04:02.349544   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.349557   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:02.349563   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:02.349622   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:02.388334   72639 cri.go:89] found id: ""
	I1014 15:04:02.388361   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.388371   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:02.388376   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:02.388428   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:02.422742   72639 cri.go:89] found id: ""
	I1014 15:04:02.422770   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.422781   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:02.422789   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:02.422850   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:02.463686   72639 cri.go:89] found id: ""
	I1014 15:04:02.463710   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.463718   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:02.463724   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:02.463770   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:02.498352   72639 cri.go:89] found id: ""
	I1014 15:04:02.498383   72639 logs.go:282] 0 containers: []
	W1014 15:04:02.498394   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:02.498404   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:02.498418   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:02.512531   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:02.512561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:02.585331   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:02.585359   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:02.585373   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:02.667376   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:02.667414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:02.708101   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:02.708133   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:00.136170   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.138198   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:01.566456   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.066934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:02.309906   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:04.310009   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.310084   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:05.259839   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:05.273102   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:05.273186   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:05.311745   72639 cri.go:89] found id: ""
	I1014 15:04:05.311768   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.311776   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:05.311787   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:05.311834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:05.349313   72639 cri.go:89] found id: ""
	I1014 15:04:05.349336   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.349344   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:05.349352   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:05.349416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:05.388003   72639 cri.go:89] found id: ""
	I1014 15:04:05.388026   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.388034   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:05.388039   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:05.388098   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:05.426636   72639 cri.go:89] found id: ""
	I1014 15:04:05.426665   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.426676   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:05.426683   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:05.426745   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:05.461945   72639 cri.go:89] found id: ""
	I1014 15:04:05.461974   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.461983   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:05.461989   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:05.462049   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:05.497099   72639 cri.go:89] found id: ""
	I1014 15:04:05.497130   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.497142   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:05.497149   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:05.497216   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:05.531621   72639 cri.go:89] found id: ""
	I1014 15:04:05.531652   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.531664   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:05.531671   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:05.531729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:05.568950   72639 cri.go:89] found id: ""
	I1014 15:04:05.568973   72639 logs.go:282] 0 containers: []
	W1014 15:04:05.568983   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:05.568992   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:05.569012   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:05.624806   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:05.624846   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:05.651912   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:05.651961   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:05.740342   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:05.740369   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:05.740384   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:05.817901   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:05.817932   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:04.636643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:07.137525   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:06.566519   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.567458   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.809718   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.809968   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:08.360267   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:08.373249   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:08.373325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:08.409485   72639 cri.go:89] found id: ""
	I1014 15:04:08.409520   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.409535   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:08.409542   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:08.409604   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:08.444977   72639 cri.go:89] found id: ""
	I1014 15:04:08.445000   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.445008   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:08.445014   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:08.445061   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:08.478080   72639 cri.go:89] found id: ""
	I1014 15:04:08.478108   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.478117   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:08.478123   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:08.478169   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:08.511510   72639 cri.go:89] found id: ""
	I1014 15:04:08.511536   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.511545   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:08.511552   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:08.511603   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:08.546260   72639 cri.go:89] found id: ""
	I1014 15:04:08.546285   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.546292   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:08.546299   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:08.546347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:08.582775   72639 cri.go:89] found id: ""
	I1014 15:04:08.582799   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.582810   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:08.582816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:08.582875   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:08.619208   72639 cri.go:89] found id: ""
	I1014 15:04:08.619231   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.619239   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:08.619244   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:08.619299   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:08.654823   72639 cri.go:89] found id: ""
	I1014 15:04:08.654849   72639 logs.go:282] 0 containers: []
	W1014 15:04:08.654860   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:08.654870   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:08.654885   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:08.704543   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:08.704574   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:08.718111   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:08.718144   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:08.792267   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:08.792290   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:08.792309   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:08.870178   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:08.870210   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:11.409975   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:11.432171   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:11.432243   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:11.468997   72639 cri.go:89] found id: ""
	I1014 15:04:11.469021   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.469030   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:11.469035   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:11.469094   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:11.504312   72639 cri.go:89] found id: ""
	I1014 15:04:11.504337   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.504346   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:11.504354   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:11.504417   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:11.540628   72639 cri.go:89] found id: ""
	I1014 15:04:11.540654   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.540662   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:11.540667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:11.540729   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:11.576466   72639 cri.go:89] found id: ""
	I1014 15:04:11.576491   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.576498   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:11.576506   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:11.576550   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:11.611466   72639 cri.go:89] found id: ""
	I1014 15:04:11.611501   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.611512   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:11.611519   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:11.611578   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:11.650089   72639 cri.go:89] found id: ""
	I1014 15:04:11.650116   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.650126   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:11.650133   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:11.650191   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:11.686538   72639 cri.go:89] found id: ""
	I1014 15:04:11.686563   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.686571   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:11.686577   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:11.686654   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:11.725494   72639 cri.go:89] found id: ""
	I1014 15:04:11.725517   72639 logs.go:282] 0 containers: []
	W1014 15:04:11.725524   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:11.725532   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:11.725545   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:11.779062   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:11.779102   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:11.792726   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:11.792753   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:11.867945   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:11.867972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:11.867986   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:11.952299   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:11.952340   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:09.636140   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:11.636455   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.136183   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:10.567626   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.065875   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.066484   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:13.310523   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:15.811094   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:14.493922   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:14.506754   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:14.506817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:14.540456   72639 cri.go:89] found id: ""
	I1014 15:04:14.540480   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.540489   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:14.540495   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:14.540545   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:14.574819   72639 cri.go:89] found id: ""
	I1014 15:04:14.574843   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.574853   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:14.574859   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:14.574917   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:14.608834   72639 cri.go:89] found id: ""
	I1014 15:04:14.608859   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.608868   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:14.608873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:14.608920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:14.644182   72639 cri.go:89] found id: ""
	I1014 15:04:14.644210   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.644218   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:14.644223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:14.644283   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:14.679113   72639 cri.go:89] found id: ""
	I1014 15:04:14.679145   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.679156   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:14.679164   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:14.679228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:14.716111   72639 cri.go:89] found id: ""
	I1014 15:04:14.716142   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.716154   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:14.716167   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:14.716220   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:14.755884   72639 cri.go:89] found id: ""
	I1014 15:04:14.755907   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.755915   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:14.755920   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:14.755968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:14.794167   72639 cri.go:89] found id: ""
	I1014 15:04:14.794195   72639 logs.go:282] 0 containers: []
	W1014 15:04:14.794207   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:14.794217   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:14.794234   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:14.844828   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:14.844864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:14.859424   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:14.859451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:14.936660   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:14.936687   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:14.936703   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:15.017034   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:15.017070   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:17.555604   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:17.570628   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:17.570687   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:17.612919   72639 cri.go:89] found id: ""
	I1014 15:04:17.612943   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.612951   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:17.612956   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:17.613002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:17.651178   72639 cri.go:89] found id: ""
	I1014 15:04:17.651210   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.651220   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:17.651226   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:17.651278   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:17.687923   72639 cri.go:89] found id: ""
	I1014 15:04:17.687955   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.687966   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:17.687973   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:17.688024   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:17.724759   72639 cri.go:89] found id: ""
	I1014 15:04:17.724790   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.724800   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:17.724807   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:17.724866   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:17.760189   72639 cri.go:89] found id: ""
	I1014 15:04:17.760212   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.760220   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:17.760226   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:17.760274   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:17.797517   72639 cri.go:89] found id: ""
	I1014 15:04:17.797541   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.797549   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:17.797554   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:17.797601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:17.833238   72639 cri.go:89] found id: ""
	I1014 15:04:17.833261   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.833270   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:17.833275   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:17.833321   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:17.868828   72639 cri.go:89] found id: ""
	I1014 15:04:17.868857   72639 logs.go:282] 0 containers: []
	W1014 15:04:17.868865   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:17.868873   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:17.868883   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:17.956972   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:17.957011   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:16.137357   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.636865   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:17.067415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:19.566146   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.310380   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:20.809526   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:18.006354   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:18.006390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:18.056237   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:18.056271   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:18.070763   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:18.070792   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:18.147471   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:20.648238   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:20.661465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:20.661534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:20.695869   72639 cri.go:89] found id: ""
	I1014 15:04:20.695894   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.695902   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:20.695907   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:20.695957   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:20.729271   72639 cri.go:89] found id: ""
	I1014 15:04:20.729295   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.729313   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:20.729319   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:20.729364   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:20.767110   72639 cri.go:89] found id: ""
	I1014 15:04:20.767137   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.767147   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:20.767154   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:20.767209   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:20.802752   72639 cri.go:89] found id: ""
	I1014 15:04:20.802781   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.802791   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:20.802798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:20.802846   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:20.841958   72639 cri.go:89] found id: ""
	I1014 15:04:20.841987   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.841998   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:20.842005   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:20.842066   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:20.878869   72639 cri.go:89] found id: ""
	I1014 15:04:20.878896   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.878907   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:20.878914   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:20.878974   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:20.913802   72639 cri.go:89] found id: ""
	I1014 15:04:20.913838   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.913852   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:20.913861   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:20.913922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:20.948350   72639 cri.go:89] found id: ""
	I1014 15:04:20.948378   72639 logs.go:282] 0 containers: []
	W1014 15:04:20.948395   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:20.948403   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:20.948416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:21.001065   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:21.001098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:21.014427   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:21.014458   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:21.091386   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:21.091412   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:21.091432   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:21.175255   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:21.175299   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:21.137358   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.636623   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.066415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:24.066476   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:22.809925   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:25.309528   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:23.718260   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:23.732366   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:23.732445   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:23.767269   72639 cri.go:89] found id: ""
	I1014 15:04:23.767299   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.767311   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:23.767317   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:23.767379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:23.808502   72639 cri.go:89] found id: ""
	I1014 15:04:23.808532   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.808543   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:23.808550   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:23.808606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:23.845632   72639 cri.go:89] found id: ""
	I1014 15:04:23.845664   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.845677   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:23.845685   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:23.845753   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:23.880218   72639 cri.go:89] found id: ""
	I1014 15:04:23.880249   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.880261   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:23.880268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:23.880332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:23.915674   72639 cri.go:89] found id: ""
	I1014 15:04:23.915697   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.915705   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:23.915710   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:23.915767   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:23.950526   72639 cri.go:89] found id: ""
	I1014 15:04:23.950559   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.950570   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:23.950578   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:23.950656   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:23.986130   72639 cri.go:89] found id: ""
	I1014 15:04:23.986167   72639 logs.go:282] 0 containers: []
	W1014 15:04:23.986178   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:23.986186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:23.986246   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:24.027112   72639 cri.go:89] found id: ""
	I1014 15:04:24.027141   72639 logs.go:282] 0 containers: []
	W1014 15:04:24.027154   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:24.027165   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:24.027181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:24.082559   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:24.082610   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:24.096900   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:24.096929   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:24.173293   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:24.173327   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:24.173341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:24.256921   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:24.256962   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:26.802073   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:26.817307   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:26.817366   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:26.855777   72639 cri.go:89] found id: ""
	I1014 15:04:26.855805   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.855817   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:26.855825   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:26.855876   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:26.892260   72639 cri.go:89] found id: ""
	I1014 15:04:26.892288   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.892300   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:26.892308   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:26.892369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:26.931066   72639 cri.go:89] found id: ""
	I1014 15:04:26.931103   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.931114   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:26.931122   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:26.931174   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:26.966890   72639 cri.go:89] found id: ""
	I1014 15:04:26.966923   72639 logs.go:282] 0 containers: []
	W1014 15:04:26.966933   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:26.966941   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:26.967002   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:27.001338   72639 cri.go:89] found id: ""
	I1014 15:04:27.001368   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.001379   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:27.001386   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:27.001454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:27.041798   72639 cri.go:89] found id: ""
	I1014 15:04:27.041830   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.041839   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:27.041844   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:27.041905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:27.080248   72639 cri.go:89] found id: ""
	I1014 15:04:27.080279   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.080288   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:27.080293   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:27.080341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:27.116207   72639 cri.go:89] found id: ""
	I1014 15:04:27.116234   72639 logs.go:282] 0 containers: []
	W1014 15:04:27.116242   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:27.116250   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:27.116264   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:27.191149   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:27.191174   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:27.191203   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:27.275771   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:27.275808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:27.323223   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:27.323254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:27.375409   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:27.375455   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:26.137156   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.637895   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:26.066790   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:28.565208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:27.810315   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.309211   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:29.890408   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:29.904797   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:29.904853   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:29.938655   72639 cri.go:89] found id: ""
	I1014 15:04:29.938685   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.938698   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:29.938705   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:29.938765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:29.976477   72639 cri.go:89] found id: ""
	I1014 15:04:29.976508   72639 logs.go:282] 0 containers: []
	W1014 15:04:29.976519   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:29.976526   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:29.976583   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:30.014813   72639 cri.go:89] found id: ""
	I1014 15:04:30.014842   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.014853   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:30.014860   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:30.014926   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:30.050804   72639 cri.go:89] found id: ""
	I1014 15:04:30.050833   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.050844   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:30.050854   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:30.050918   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:30.087921   72639 cri.go:89] found id: ""
	I1014 15:04:30.087946   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.087954   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:30.087959   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:30.088016   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:30.125411   72639 cri.go:89] found id: ""
	I1014 15:04:30.125446   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.125458   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:30.125465   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:30.125519   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:30.162067   72639 cri.go:89] found id: ""
	I1014 15:04:30.162099   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.162110   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:30.162118   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:30.162181   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:30.200376   72639 cri.go:89] found id: ""
	I1014 15:04:30.200406   72639 logs.go:282] 0 containers: []
	W1014 15:04:30.200418   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:30.200435   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:30.200451   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:30.279965   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:30.279992   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:30.280007   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:30.364866   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:30.364900   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:30.408808   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:30.408842   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:30.464473   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:30.464507   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:32.980254   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:32.994254   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:32.994320   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:31.136531   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.137201   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:30.566228   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.567393   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.065955   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:32.810349   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:35.308794   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:33.035996   72639 cri.go:89] found id: ""
	I1014 15:04:33.036025   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.036036   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:33.036043   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:33.036103   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:33.077494   72639 cri.go:89] found id: ""
	I1014 15:04:33.077522   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.077531   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:33.077538   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:33.077585   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:33.112666   72639 cri.go:89] found id: ""
	I1014 15:04:33.112695   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.112705   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:33.112711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:33.112772   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:33.150229   72639 cri.go:89] found id: ""
	I1014 15:04:33.150266   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.150276   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:33.150282   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:33.150336   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:33.186960   72639 cri.go:89] found id: ""
	I1014 15:04:33.186989   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.187001   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:33.187008   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:33.187062   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:33.223596   72639 cri.go:89] found id: ""
	I1014 15:04:33.223631   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.223641   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:33.223647   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:33.223711   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:33.260137   72639 cri.go:89] found id: ""
	I1014 15:04:33.260162   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.260170   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:33.260175   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:33.260228   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:33.298072   72639 cri.go:89] found id: ""
	I1014 15:04:33.298095   72639 logs.go:282] 0 containers: []
	W1014 15:04:33.298103   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:33.298110   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:33.298121   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:33.379587   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:33.379623   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:33.423427   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:33.423456   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:33.474644   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:33.474683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:33.488324   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:33.488354   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:33.556257   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.056955   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:36.072461   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:36.072536   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:36.109467   72639 cri.go:89] found id: ""
	I1014 15:04:36.109493   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.109502   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:36.109509   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:36.109561   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:36.147985   72639 cri.go:89] found id: ""
	I1014 15:04:36.148012   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.148020   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:36.148025   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:36.148071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:36.183885   72639 cri.go:89] found id: ""
	I1014 15:04:36.183906   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.183914   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:36.183919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:36.183968   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:36.220994   72639 cri.go:89] found id: ""
	I1014 15:04:36.221025   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.221036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:36.221044   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:36.221108   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:36.256586   72639 cri.go:89] found id: ""
	I1014 15:04:36.256612   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.256621   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:36.256627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:36.256683   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:36.293229   72639 cri.go:89] found id: ""
	I1014 15:04:36.293256   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.293265   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:36.293272   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:36.293339   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:36.329254   72639 cri.go:89] found id: ""
	I1014 15:04:36.329279   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.329290   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:36.329297   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:36.329357   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:36.366495   72639 cri.go:89] found id: ""
	I1014 15:04:36.366526   72639 logs.go:282] 0 containers: []
	W1014 15:04:36.366538   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:36.366548   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:36.366561   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:36.420985   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:36.421018   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:36.435532   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:36.435565   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:36.510459   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:36.510484   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:36.510499   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:36.593057   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:36.593094   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:35.637182   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.637348   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.066334   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.566950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:37.309629   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.809500   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:39.138570   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:39.152280   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:39.152342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:39.186647   72639 cri.go:89] found id: ""
	I1014 15:04:39.186676   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.186687   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:39.186694   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:39.186754   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:39.223560   72639 cri.go:89] found id: ""
	I1014 15:04:39.223586   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.223594   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:39.223599   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:39.223644   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:39.257835   72639 cri.go:89] found id: ""
	I1014 15:04:39.257867   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.257879   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:39.257886   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:39.257947   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:39.294656   72639 cri.go:89] found id: ""
	I1014 15:04:39.294684   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.294692   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:39.294699   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:39.294750   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:39.333474   72639 cri.go:89] found id: ""
	I1014 15:04:39.333503   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.333513   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:39.333520   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:39.333586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:39.374385   72639 cri.go:89] found id: ""
	I1014 15:04:39.374414   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.374424   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:39.374435   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:39.374483   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:39.412856   72639 cri.go:89] found id: ""
	I1014 15:04:39.412888   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.412899   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:39.412906   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:39.412966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:39.463087   72639 cri.go:89] found id: ""
	I1014 15:04:39.463115   72639 logs.go:282] 0 containers: []
	W1014 15:04:39.463127   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:39.463138   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:39.463154   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:39.514309   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:39.514342   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:39.528947   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:39.528972   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:39.603984   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:39.604004   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:39.604016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.685053   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:39.685093   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.234178   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:42.247421   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:42.247497   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:42.288496   72639 cri.go:89] found id: ""
	I1014 15:04:42.288521   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.288529   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:42.288535   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:42.288588   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:42.324346   72639 cri.go:89] found id: ""
	I1014 15:04:42.324382   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.324394   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:42.324401   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:42.324469   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:42.362879   72639 cri.go:89] found id: ""
	I1014 15:04:42.362910   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.362922   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:42.362928   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:42.362991   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:42.399347   72639 cri.go:89] found id: ""
	I1014 15:04:42.399375   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.399383   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:42.399389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:42.399473   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:42.434942   72639 cri.go:89] found id: ""
	I1014 15:04:42.434971   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.434990   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:42.434999   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:42.435063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:42.470886   72639 cri.go:89] found id: ""
	I1014 15:04:42.470916   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.470928   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:42.470934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:42.470994   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:42.510713   72639 cri.go:89] found id: ""
	I1014 15:04:42.510742   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.510752   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:42.510758   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:42.510820   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:42.544506   72639 cri.go:89] found id: ""
	I1014 15:04:42.544538   72639 logs.go:282] 0 containers: []
	W1014 15:04:42.544547   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:42.544559   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:42.544570   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:42.588658   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:42.588694   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:42.642165   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:42.642198   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:42.658073   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:42.658110   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:42.730486   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:42.730510   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:42.730524   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:39.637476   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.637715   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.137654   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:42.065534   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.066309   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:41.809932   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:44.309377   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.309699   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:45.307806   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:45.321664   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:45.321733   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:45.359670   72639 cri.go:89] found id: ""
	I1014 15:04:45.359697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.359708   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:45.359715   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:45.359781   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:45.398673   72639 cri.go:89] found id: ""
	I1014 15:04:45.398703   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.398715   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:45.398722   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:45.398784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:45.441656   72639 cri.go:89] found id: ""
	I1014 15:04:45.441685   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.441697   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:45.441705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:45.441768   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:45.476159   72639 cri.go:89] found id: ""
	I1014 15:04:45.476188   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.476195   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:45.476201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:45.476263   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:45.513776   72639 cri.go:89] found id: ""
	I1014 15:04:45.513807   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.513819   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:45.513828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:45.513894   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:45.550336   72639 cri.go:89] found id: ""
	I1014 15:04:45.550371   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.550382   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:45.550388   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:45.550450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:45.586668   72639 cri.go:89] found id: ""
	I1014 15:04:45.586697   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.586705   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:45.586711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:45.586760   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:45.622530   72639 cri.go:89] found id: ""
	I1014 15:04:45.622559   72639 logs.go:282] 0 containers: []
	W1014 15:04:45.622568   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:45.622576   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:45.622589   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:45.674471   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:45.674504   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:45.690430   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:45.690463   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:45.772133   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:45.772165   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:45.772181   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:45.859835   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:45.859880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:46.636239   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.637696   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:46.565440   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.569076   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.309788   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.310209   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:48.434011   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:48.448747   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:48.448826   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:48.493642   72639 cri.go:89] found id: ""
	I1014 15:04:48.493668   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.493680   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:48.493687   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:48.493747   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:48.530298   72639 cri.go:89] found id: ""
	I1014 15:04:48.530327   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.530336   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:48.530344   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:48.530403   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:48.566215   72639 cri.go:89] found id: ""
	I1014 15:04:48.566242   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.566252   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:48.566261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:48.566325   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:48.604528   72639 cri.go:89] found id: ""
	I1014 15:04:48.604553   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.604561   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:48.604566   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:48.604616   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:48.646152   72639 cri.go:89] found id: ""
	I1014 15:04:48.646180   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.646191   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:48.646198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:48.646257   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:48.682670   72639 cri.go:89] found id: ""
	I1014 15:04:48.682696   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.682704   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:48.682711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:48.682762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:48.722292   72639 cri.go:89] found id: ""
	I1014 15:04:48.722318   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.722326   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:48.722335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:48.722400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:48.762474   72639 cri.go:89] found id: ""
	I1014 15:04:48.762506   72639 logs.go:282] 0 containers: []
	W1014 15:04:48.762518   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:48.762528   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:48.762553   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:48.776628   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:48.776652   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:48.849904   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:48.849928   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:48.849941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:48.927033   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:48.927068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:48.970775   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:48.970807   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:51.521113   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:51.535318   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:51.535389   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:51.582631   72639 cri.go:89] found id: ""
	I1014 15:04:51.582658   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.582666   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:51.582671   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:51.582721   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:51.655323   72639 cri.go:89] found id: ""
	I1014 15:04:51.655362   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.655371   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:51.655376   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:51.655433   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:51.722837   72639 cri.go:89] found id: ""
	I1014 15:04:51.722863   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.722875   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:51.722882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:51.722939   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:51.759917   72639 cri.go:89] found id: ""
	I1014 15:04:51.759946   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.759957   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:51.759963   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:51.760023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:51.798656   72639 cri.go:89] found id: ""
	I1014 15:04:51.798689   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.798702   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:51.798711   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:51.798777   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:51.839285   72639 cri.go:89] found id: ""
	I1014 15:04:51.839312   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.839324   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:51.839334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:51.839391   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:51.876997   72639 cri.go:89] found id: ""
	I1014 15:04:51.877028   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.877038   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:51.877045   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:51.877091   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:51.913991   72639 cri.go:89] found id: ""
	I1014 15:04:51.914020   72639 logs.go:282] 0 containers: []
	W1014 15:04:51.914028   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:51.914036   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:51.914046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:51.993392   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:51.993427   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:52.039722   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:52.039756   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:52.090901   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:52.090937   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:52.105014   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:52.105052   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:52.175505   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:51.137343   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.636660   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:50.575054   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:53.067208   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:52.809933   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.810498   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:54.676549   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:54.690113   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:54.690204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:54.726478   72639 cri.go:89] found id: ""
	I1014 15:04:54.726511   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.726523   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:54.726538   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:54.726611   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:54.764990   72639 cri.go:89] found id: ""
	I1014 15:04:54.765017   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.765025   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:54.765031   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:54.765095   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:54.804779   72639 cri.go:89] found id: ""
	I1014 15:04:54.804808   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.804819   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:54.804828   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:54.804886   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:54.848657   72639 cri.go:89] found id: ""
	I1014 15:04:54.848682   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.848698   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:54.848705   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:54.848765   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:54.886806   72639 cri.go:89] found id: ""
	I1014 15:04:54.886834   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.886845   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:54.886853   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:54.886912   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:54.923297   72639 cri.go:89] found id: ""
	I1014 15:04:54.923323   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.923330   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:54.923335   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:54.923380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:54.966297   72639 cri.go:89] found id: ""
	I1014 15:04:54.966321   72639 logs.go:282] 0 containers: []
	W1014 15:04:54.966329   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:54.966334   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:54.966382   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:55.012047   72639 cri.go:89] found id: ""
	I1014 15:04:55.012071   72639 logs.go:282] 0 containers: []
	W1014 15:04:55.012079   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:55.012087   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:55.012097   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:55.066031   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:55.066063   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:55.080954   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:55.080981   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:55.159644   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:55.159670   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:55.159683   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:55.243303   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:55.243341   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:04:57.784555   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:04:57.799051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:04:57.799132   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:04:57.841084   72639 cri.go:89] found id: ""
	I1014 15:04:57.841108   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.841115   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:04:57.841121   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:04:57.841167   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:04:57.881510   72639 cri.go:89] found id: ""
	I1014 15:04:57.881542   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.881555   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:04:57.881562   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:04:57.881624   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:04:57.916893   72639 cri.go:89] found id: ""
	I1014 15:04:57.916923   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.916934   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:04:57.916940   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:04:57.916988   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:04:57.956991   72639 cri.go:89] found id: ""
	I1014 15:04:57.957023   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.957036   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:04:57.957046   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:04:57.957118   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:04:57.993765   72639 cri.go:89] found id: ""
	I1014 15:04:57.993792   72639 logs.go:282] 0 containers: []
	W1014 15:04:57.993803   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:04:57.993809   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:04:57.993869   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:04:56.136994   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.137736   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:55.566021   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.567950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:00.068276   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:57.310643   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:59.808898   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:04:58.032044   72639 cri.go:89] found id: ""
	I1014 15:04:58.032085   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.032098   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:04:58.032107   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:04:58.032173   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:04:58.069733   72639 cri.go:89] found id: ""
	I1014 15:04:58.069754   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.069762   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:04:58.069767   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:04:58.069813   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:04:58.105851   72639 cri.go:89] found id: ""
	I1014 15:04:58.105880   72639 logs.go:282] 0 containers: []
	W1014 15:04:58.105891   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:04:58.105901   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:04:58.105914   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:04:58.159922   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:04:58.159956   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:04:58.173779   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:04:58.173802   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:04:58.253551   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:04:58.253576   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:04:58.253591   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:04:58.342607   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:04:58.342647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:00.884705   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:00.900147   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:00.900215   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:00.940372   72639 cri.go:89] found id: ""
	I1014 15:05:00.940402   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.940413   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:00.940420   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:00.940489   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:00.981400   72639 cri.go:89] found id: ""
	I1014 15:05:00.981431   72639 logs.go:282] 0 containers: []
	W1014 15:05:00.981441   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:00.981447   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:00.981517   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:01.021981   72639 cri.go:89] found id: ""
	I1014 15:05:01.022002   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.022011   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:01.022016   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:01.022067   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:01.056976   72639 cri.go:89] found id: ""
	I1014 15:05:01.057005   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.057013   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:01.057020   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:01.057063   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:01.092702   72639 cri.go:89] found id: ""
	I1014 15:05:01.092732   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.092739   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:01.092745   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:01.092803   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:01.128861   72639 cri.go:89] found id: ""
	I1014 15:05:01.128892   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.128902   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:01.128908   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:01.128958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:01.162672   72639 cri.go:89] found id: ""
	I1014 15:05:01.162702   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.162712   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:01.162719   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:01.162791   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:01.202724   72639 cri.go:89] found id: ""
	I1014 15:05:01.202751   72639 logs.go:282] 0 containers: []
	W1014 15:05:01.202761   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:01.202770   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:01.202785   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:01.280702   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:01.280723   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:01.280735   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:01.362909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:01.362943   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:01.406737   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:01.406766   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:01.460090   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:01.460125   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:00.636730   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.136587   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:02.568415   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:05.066568   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:01.809661   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:04.309079   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:06.309544   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:03.975661   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:03.989811   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:03.989874   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:04.028396   72639 cri.go:89] found id: ""
	I1014 15:05:04.028426   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.028438   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:04.028445   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:04.028499   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:04.065871   72639 cri.go:89] found id: ""
	I1014 15:05:04.065901   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.065912   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:04.065919   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:04.065980   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:04.103155   72639 cri.go:89] found id: ""
	I1014 15:05:04.103184   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.103192   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:04.103198   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:04.103248   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:04.139503   72639 cri.go:89] found id: ""
	I1014 15:05:04.139531   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.139539   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:04.139545   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:04.139601   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:04.171638   72639 cri.go:89] found id: ""
	I1014 15:05:04.171663   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.171671   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:04.171676   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:04.171734   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:04.213720   72639 cri.go:89] found id: ""
	I1014 15:05:04.213751   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.213760   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:04.213766   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:04.213815   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:04.248088   72639 cri.go:89] found id: ""
	I1014 15:05:04.248109   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.248117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:04.248121   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:04.248183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:04.286454   72639 cri.go:89] found id: ""
	I1014 15:05:04.286479   72639 logs.go:282] 0 containers: []
	W1014 15:05:04.286487   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:04.286495   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:04.286506   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:04.339564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:04.339599   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:04.353034   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:04.353061   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:04.432764   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:04.432786   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:04.432797   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:04.514561   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:04.514613   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.057507   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:07.072798   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:07.072873   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:07.113672   72639 cri.go:89] found id: ""
	I1014 15:05:07.113694   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.113701   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:07.113706   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:07.113761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:07.149321   72639 cri.go:89] found id: ""
	I1014 15:05:07.149348   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.149357   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:07.149362   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:07.149416   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:07.185717   72639 cri.go:89] found id: ""
	I1014 15:05:07.185748   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.185760   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:07.185768   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:07.185822   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:07.225747   72639 cri.go:89] found id: ""
	I1014 15:05:07.225772   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.225783   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:07.225791   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:07.225843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:07.265834   72639 cri.go:89] found id: ""
	I1014 15:05:07.265864   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.265875   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:07.265882   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:07.265944   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:07.300595   72639 cri.go:89] found id: ""
	I1014 15:05:07.300622   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.300631   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:07.300637   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:07.300686   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:07.343249   72639 cri.go:89] found id: ""
	I1014 15:05:07.343280   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.343291   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:07.343298   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:07.343365   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:07.379525   72639 cri.go:89] found id: ""
	I1014 15:05:07.379549   72639 logs.go:282] 0 containers: []
	W1014 15:05:07.379557   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:07.379564   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:07.379576   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:07.393622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:07.393653   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:07.473973   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:07.473998   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:07.474013   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:07.556937   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:07.556971   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:07.602224   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:07.602249   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:05.137157   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.137297   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.137708   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:07.066795   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:09.566723   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:08.809562   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.309821   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:10.156920   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:10.170971   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:10.171037   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:10.206568   72639 cri.go:89] found id: ""
	I1014 15:05:10.206610   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.206623   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:10.206630   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:10.206689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:10.249075   72639 cri.go:89] found id: ""
	I1014 15:05:10.249101   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.249110   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:10.249121   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:10.249175   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:10.285620   72639 cri.go:89] found id: ""
	I1014 15:05:10.285649   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.285660   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:10.285667   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:10.285730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:10.322291   72639 cri.go:89] found id: ""
	I1014 15:05:10.322314   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.322322   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:10.322327   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:10.322379   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:10.356691   72639 cri.go:89] found id: ""
	I1014 15:05:10.356720   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.356730   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:10.356738   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:10.356802   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:10.401192   72639 cri.go:89] found id: ""
	I1014 15:05:10.401223   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.401234   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:10.401242   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:10.401303   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:10.438198   72639 cri.go:89] found id: ""
	I1014 15:05:10.438225   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.438236   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:10.438243   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:10.438380   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:10.474142   72639 cri.go:89] found id: ""
	I1014 15:05:10.474166   72639 logs.go:282] 0 containers: []
	W1014 15:05:10.474174   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:10.474181   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:10.474193   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:10.546549   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:10.546569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:10.546582   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:10.624235   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:10.624268   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:10.664896   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:10.664926   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:10.719425   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:10.719464   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:11.637824   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.139552   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:11.566806   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:14.066803   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.809728   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.310153   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:13.234162   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:13.247614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:13.247689   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:13.285040   72639 cri.go:89] found id: ""
	I1014 15:05:13.285068   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.285078   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:13.285086   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:13.285154   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:13.334084   72639 cri.go:89] found id: ""
	I1014 15:05:13.334125   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.334133   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:13.334139   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:13.334204   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:13.369164   72639 cri.go:89] found id: ""
	I1014 15:05:13.369199   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.369211   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:13.369223   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:13.369285   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:13.405202   72639 cri.go:89] found id: ""
	I1014 15:05:13.405232   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.405244   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:13.405252   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:13.405304   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:13.443271   72639 cri.go:89] found id: ""
	I1014 15:05:13.443302   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.443311   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:13.443317   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:13.443369   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:13.483541   72639 cri.go:89] found id: ""
	I1014 15:05:13.483570   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.483580   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:13.483588   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:13.483650   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:13.518580   72639 cri.go:89] found id: ""
	I1014 15:05:13.518622   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.518633   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:13.518641   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:13.518701   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:13.553638   72639 cri.go:89] found id: ""
	I1014 15:05:13.553668   72639 logs.go:282] 0 containers: []
	W1014 15:05:13.553678   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:13.553688   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:13.553702   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:13.605379   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:13.605413   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:13.620525   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:13.620556   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:13.699628   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:13.699658   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:13.699672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:13.778006   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:13.778046   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.316703   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:16.331511   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:16.331577   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:16.367045   72639 cri.go:89] found id: ""
	I1014 15:05:16.367075   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.367083   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:16.367089   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:16.367144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:16.403240   72639 cri.go:89] found id: ""
	I1014 15:05:16.403264   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.403274   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:16.403285   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:16.403344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:16.438570   72639 cri.go:89] found id: ""
	I1014 15:05:16.438612   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.438625   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:16.438632   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:16.438694   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:16.477153   72639 cri.go:89] found id: ""
	I1014 15:05:16.477174   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.477182   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:16.477187   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:16.477232   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:16.516308   72639 cri.go:89] found id: ""
	I1014 15:05:16.516336   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.516348   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:16.516355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:16.516421   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:16.551337   72639 cri.go:89] found id: ""
	I1014 15:05:16.551365   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.551375   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:16.551383   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:16.551450   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:16.587073   72639 cri.go:89] found id: ""
	I1014 15:05:16.587105   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.587117   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:16.587125   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:16.587183   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:16.623940   72639 cri.go:89] found id: ""
	I1014 15:05:16.623962   72639 logs.go:282] 0 containers: []
	W1014 15:05:16.623970   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:16.623978   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:16.623989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:16.671593   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:16.671618   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:16.723057   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:16.723092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:16.737623   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:16.737656   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:16.809539   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:16.809569   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:16.809592   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:16.636818   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.637340   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:16.566523   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.065985   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:18.809554   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:19.390406   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:19.404850   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:19.404928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:19.446931   72639 cri.go:89] found id: ""
	I1014 15:05:19.446962   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.446973   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:19.446980   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:19.447043   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:19.488112   72639 cri.go:89] found id: ""
	I1014 15:05:19.488136   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.488144   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:19.488150   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:19.488208   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:19.523333   72639 cri.go:89] found id: ""
	I1014 15:05:19.523365   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.523382   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:19.523389   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:19.523447   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:19.557887   72639 cri.go:89] found id: ""
	I1014 15:05:19.557910   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.557918   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:19.557927   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:19.557972   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:19.593792   72639 cri.go:89] found id: ""
	I1014 15:05:19.593815   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.593822   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:19.593873   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:19.593922   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:19.628291   72639 cri.go:89] found id: ""
	I1014 15:05:19.628324   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.628335   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:19.628343   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:19.628405   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:19.664088   72639 cri.go:89] found id: ""
	I1014 15:05:19.664118   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.664130   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:19.664138   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:19.664211   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:19.700825   72639 cri.go:89] found id: ""
	I1014 15:05:19.700853   72639 logs.go:282] 0 containers: []
	W1014 15:05:19.700863   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:19.700873   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:19.700886   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:19.741631   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:19.741666   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:19.792667   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:19.792706   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:19.806928   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:19.806965   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:19.880030   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:19.880059   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:19.880073   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.465251   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:22.479031   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:22.479096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:22.519123   72639 cri.go:89] found id: ""
	I1014 15:05:22.519147   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.519158   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:22.519171   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:22.519235   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:22.552250   72639 cri.go:89] found id: ""
	I1014 15:05:22.552277   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.552287   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:22.552294   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:22.552354   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:22.594213   72639 cri.go:89] found id: ""
	I1014 15:05:22.594243   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.594253   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:22.594261   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:22.594310   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:22.630081   72639 cri.go:89] found id: ""
	I1014 15:05:22.630110   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.630121   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:22.630129   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:22.630195   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:22.665454   72639 cri.go:89] found id: ""
	I1014 15:05:22.665485   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.665497   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:22.665505   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:22.665568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:22.710697   72639 cri.go:89] found id: ""
	I1014 15:05:22.710725   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.710734   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:22.710742   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:22.710798   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:22.748486   72639 cri.go:89] found id: ""
	I1014 15:05:22.748516   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.748527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:22.748534   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:22.748594   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:22.784646   72639 cri.go:89] found id: ""
	I1014 15:05:22.784674   72639 logs.go:282] 0 containers: []
	W1014 15:05:22.784684   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:22.784695   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:22.784709   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:22.797853   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:22.797880   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:22.875382   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:22.875406   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:22.875422   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:22.957055   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:22.957089   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:20.638448   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.137051   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:21.066950   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.566775   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.309958   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:25.810168   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:23.008642   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:23.008672   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.561277   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:25.575543   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:25.575606   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:25.614260   72639 cri.go:89] found id: ""
	I1014 15:05:25.614283   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.614291   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:25.614296   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:25.614353   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:25.654267   72639 cri.go:89] found id: ""
	I1014 15:05:25.654295   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.654307   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:25.654314   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:25.654385   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:25.707597   72639 cri.go:89] found id: ""
	I1014 15:05:25.707626   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.707637   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:25.707644   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:25.707707   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:25.747477   72639 cri.go:89] found id: ""
	I1014 15:05:25.747500   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.747508   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:25.747513   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:25.747571   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:25.785245   72639 cri.go:89] found id: ""
	I1014 15:05:25.785270   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.785279   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:25.785288   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:25.785342   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:25.820619   72639 cri.go:89] found id: ""
	I1014 15:05:25.820643   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.820651   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:25.820665   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:25.820722   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:25.861644   72639 cri.go:89] found id: ""
	I1014 15:05:25.861665   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.861673   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:25.861678   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:25.861724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:25.901009   72639 cri.go:89] found id: ""
	I1014 15:05:25.901032   72639 logs.go:282] 0 containers: []
	W1014 15:05:25.901046   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:25.901056   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:25.901068   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:25.942918   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:25.942941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:25.993931   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:25.993964   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:26.008252   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:26.008280   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:26.087316   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:26.087336   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:26.087347   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:25.636727   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:27.637053   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:26.066529   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.567224   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.308855   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:30.811310   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:28.667377   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:28.682586   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:28.682682   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:28.729576   72639 cri.go:89] found id: ""
	I1014 15:05:28.729600   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.729608   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:28.729614   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:28.729673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:28.766637   72639 cri.go:89] found id: ""
	I1014 15:05:28.766669   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.766682   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:28.766690   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:28.766762   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:28.802280   72639 cri.go:89] found id: ""
	I1014 15:05:28.802308   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.802317   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:28.802322   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:28.802395   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:28.840788   72639 cri.go:89] found id: ""
	I1014 15:05:28.840822   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.840833   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:28.840841   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:28.840898   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:28.878403   72639 cri.go:89] found id: ""
	I1014 15:05:28.878437   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.878447   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:28.878453   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:28.878505   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:28.919054   72639 cri.go:89] found id: ""
	I1014 15:05:28.919082   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.919090   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:28.919096   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:28.919146   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:28.955097   72639 cri.go:89] found id: ""
	I1014 15:05:28.955124   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.955134   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:28.955142   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:28.955214   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:28.995681   72639 cri.go:89] found id: ""
	I1014 15:05:28.995711   72639 logs.go:282] 0 containers: []
	W1014 15:05:28.995722   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:28.995731   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:28.995746   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:29.073041   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:29.073066   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:29.073083   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:29.152803   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:29.152838   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:29.192205   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:29.192239   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:29.248128   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:29.248166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:31.762647   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:31.776372   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:31.776454   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:31.812234   72639 cri.go:89] found id: ""
	I1014 15:05:31.812259   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.812268   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:31.812275   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:31.812347   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:31.850248   72639 cri.go:89] found id: ""
	I1014 15:05:31.850277   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.850294   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:31.850301   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:31.850363   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:31.887768   72639 cri.go:89] found id: ""
	I1014 15:05:31.887796   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.887808   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:31.887816   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:31.887870   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:31.923434   72639 cri.go:89] found id: ""
	I1014 15:05:31.923464   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.923476   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:31.923483   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:31.923547   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:31.961027   72639 cri.go:89] found id: ""
	I1014 15:05:31.961055   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.961066   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:31.961073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:31.961135   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:31.996222   72639 cri.go:89] found id: ""
	I1014 15:05:31.996250   72639 logs.go:282] 0 containers: []
	W1014 15:05:31.996260   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:31.996267   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:31.996329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:32.034396   72639 cri.go:89] found id: ""
	I1014 15:05:32.034441   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.034452   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:32.034460   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:32.034528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:32.080105   72639 cri.go:89] found id: ""
	I1014 15:05:32.080142   72639 logs.go:282] 0 containers: []
	W1014 15:05:32.080153   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:32.080164   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:32.080178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:32.161120   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:32.161151   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:32.213511   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:32.213546   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:32.271250   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:32.271287   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:32.285452   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:32.285483   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:32.366108   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:30.136896   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:32.138906   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:31.066229   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.066370   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.067821   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:33.309846   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:35.310713   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:34.867317   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:34.882058   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:34.882125   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.926220   72639 cri.go:89] found id: ""
	I1014 15:05:34.926251   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.926261   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:34.926268   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:34.926341   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:34.965657   72639 cri.go:89] found id: ""
	I1014 15:05:34.965691   72639 logs.go:282] 0 containers: []
	W1014 15:05:34.965702   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:34.965709   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:34.965775   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:35.002422   72639 cri.go:89] found id: ""
	I1014 15:05:35.002446   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.002454   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:35.002459   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:35.002523   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:35.040029   72639 cri.go:89] found id: ""
	I1014 15:05:35.040057   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.040067   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:35.040073   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:35.040137   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:35.077041   72639 cri.go:89] found id: ""
	I1014 15:05:35.077067   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.077075   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:35.077080   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:35.077129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:35.113723   72639 cri.go:89] found id: ""
	I1014 15:05:35.113754   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.113763   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:35.113770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:35.113854   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:35.152003   72639 cri.go:89] found id: ""
	I1014 15:05:35.152025   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.152033   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:35.152038   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:35.152084   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:35.186707   72639 cri.go:89] found id: ""
	I1014 15:05:35.186735   72639 logs.go:282] 0 containers: []
	W1014 15:05:35.186746   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:35.186756   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:35.186769   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:35.267899   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:35.267941   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:35.310382   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:35.310414   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:35.364811   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:35.364852   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:35.378359   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:35.378386   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:35.453522   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:37.953807   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:37.967515   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:37.967579   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:34.637257   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.137643   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.566344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:39.566704   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:37.810414   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:40.308798   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:38.007923   72639 cri.go:89] found id: ""
	I1014 15:05:38.007955   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.007964   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:38.007969   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:38.008023   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:38.047451   72639 cri.go:89] found id: ""
	I1014 15:05:38.047476   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.047484   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:38.047490   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:38.047542   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:38.087141   72639 cri.go:89] found id: ""
	I1014 15:05:38.087165   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.087174   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:38.087186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:38.087234   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:38.126556   72639 cri.go:89] found id: ""
	I1014 15:05:38.126583   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.126604   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:38.126612   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:38.126670   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:38.165318   72639 cri.go:89] found id: ""
	I1014 15:05:38.165341   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.165350   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:38.165356   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:38.165400   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:38.199498   72639 cri.go:89] found id: ""
	I1014 15:05:38.199533   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.199544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:38.199553   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:38.199618   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:38.235030   72639 cri.go:89] found id: ""
	I1014 15:05:38.235058   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.235067   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:38.235072   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:38.235129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:38.268900   72639 cri.go:89] found id: ""
	I1014 15:05:38.268926   72639 logs.go:282] 0 containers: []
	W1014 15:05:38.268935   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:38.268943   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:38.268957   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:38.282503   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:38.282532   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:38.357943   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:38.357972   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:38.357987   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:38.448417   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:38.448453   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:38.490023   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:38.490049   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.045691   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:41.061188   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:41.061251   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:41.102885   72639 cri.go:89] found id: ""
	I1014 15:05:41.102909   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.102917   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:41.102923   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:41.102971   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:41.139402   72639 cri.go:89] found id: ""
	I1014 15:05:41.139427   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.139437   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:41.139444   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:41.139501   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:41.179881   72639 cri.go:89] found id: ""
	I1014 15:05:41.179926   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.179939   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:41.179946   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:41.180008   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:41.215861   72639 cri.go:89] found id: ""
	I1014 15:05:41.215897   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.215910   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:41.215919   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:41.215987   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:41.251314   72639 cri.go:89] found id: ""
	I1014 15:05:41.251341   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.251351   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:41.251355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:41.251404   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:41.285986   72639 cri.go:89] found id: ""
	I1014 15:05:41.286010   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.286017   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:41.286025   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:41.286071   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:41.323730   72639 cri.go:89] found id: ""
	I1014 15:05:41.323756   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.323764   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:41.323769   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:41.323816   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:41.360787   72639 cri.go:89] found id: ""
	I1014 15:05:41.360817   72639 logs.go:282] 0 containers: []
	W1014 15:05:41.360825   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:41.360834   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:41.360847   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:41.403137   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:41.403172   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:41.459217   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:41.459253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:41.473529   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:41.473558   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:41.547384   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:41.547405   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:41.547416   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:39.637477   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.137176   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:41.569245   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.066760   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:42.309212   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.310281   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:44.129494   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:44.144061   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:44.144129   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:44.185872   72639 cri.go:89] found id: ""
	I1014 15:05:44.185896   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.185904   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:44.185909   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:44.185955   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:44.222618   72639 cri.go:89] found id: ""
	I1014 15:05:44.222648   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.222658   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:44.222663   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:44.222723   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:44.260730   72639 cri.go:89] found id: ""
	I1014 15:05:44.260761   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.260773   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:44.260780   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:44.260872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:44.303033   72639 cri.go:89] found id: ""
	I1014 15:05:44.303124   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.303141   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:44.303150   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:44.303223   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:44.344573   72639 cri.go:89] found id: ""
	I1014 15:05:44.344600   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.344609   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:44.344614   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:44.344660   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:44.386091   72639 cri.go:89] found id: ""
	I1014 15:05:44.386122   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.386131   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:44.386137   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:44.386199   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:44.424609   72639 cri.go:89] found id: ""
	I1014 15:05:44.424634   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.424644   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:44.424656   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:44.424724   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:44.463997   72639 cri.go:89] found id: ""
	I1014 15:05:44.464023   72639 logs.go:282] 0 containers: []
	W1014 15:05:44.464033   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:44.464043   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:44.464057   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:44.516883   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:44.516921   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:44.530785   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:44.530820   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:44.605202   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:44.605229   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:44.605245   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:44.685277   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:44.685312   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:47.227851   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:47.242737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:47.242817   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:47.279395   72639 cri.go:89] found id: ""
	I1014 15:05:47.279421   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.279428   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:47.279434   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:47.279495   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:47.315002   72639 cri.go:89] found id: ""
	I1014 15:05:47.315032   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.315043   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:47.315050   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:47.315120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:47.354133   72639 cri.go:89] found id: ""
	I1014 15:05:47.354162   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.354173   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:47.354180   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:47.354245   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:47.389394   72639 cri.go:89] found id: ""
	I1014 15:05:47.389419   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.389427   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:47.389439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:47.389498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:47.426564   72639 cri.go:89] found id: ""
	I1014 15:05:47.426592   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.426619   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:47.426627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:47.426676   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:47.466953   72639 cri.go:89] found id: ""
	I1014 15:05:47.466980   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.466989   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:47.466996   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:47.467065   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:47.508563   72639 cri.go:89] found id: ""
	I1014 15:05:47.508595   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.508605   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:47.508613   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:47.508665   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:47.548974   72639 cri.go:89] found id: ""
	I1014 15:05:47.549002   72639 logs.go:282] 0 containers: []
	W1014 15:05:47.549012   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:47.549022   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:47.549036   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:47.604768   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:47.604799   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:47.619681   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:47.619717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:47.692479   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:47.692506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:47.692522   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:47.773711   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:47.773751   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:44.637916   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:47.137070   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.566472   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.566743   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:46.809406   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:48.811359   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:51.309691   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.314509   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:50.330883   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:50.330958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:50.375090   72639 cri.go:89] found id: ""
	I1014 15:05:50.375121   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.375133   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:50.375140   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:50.375201   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:50.415000   72639 cri.go:89] found id: ""
	I1014 15:05:50.415031   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.415041   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:50.415048   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:50.415099   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:50.453937   72639 cri.go:89] found id: ""
	I1014 15:05:50.453967   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.453976   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:50.453983   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:50.454047   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:50.498752   72639 cri.go:89] found id: ""
	I1014 15:05:50.498778   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.498785   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:50.498790   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:50.498858   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:50.537819   72639 cri.go:89] found id: ""
	I1014 15:05:50.537855   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.537864   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:50.537871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:50.537920   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:50.577141   72639 cri.go:89] found id: ""
	I1014 15:05:50.577168   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.577179   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:50.577186   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:50.577250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:50.612462   72639 cri.go:89] found id: ""
	I1014 15:05:50.612504   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.612527   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:50.612535   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:50.612597   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:50.648816   72639 cri.go:89] found id: ""
	I1014 15:05:50.648845   72639 logs.go:282] 0 containers: []
	W1014 15:05:50.648855   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:50.648866   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:50.648879   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:50.662546   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:50.662578   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:50.733128   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:50.733152   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:50.733166   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:50.810884   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:50.810913   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:50.855878   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:50.855905   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:49.637103   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:52.137615   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:50.567300   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.066883   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.810090   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.312861   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:53.413608   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:53.428380   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:53.428453   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:53.463440   72639 cri.go:89] found id: ""
	I1014 15:05:53.463464   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.463473   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:53.463479   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:53.463534   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:53.499024   72639 cri.go:89] found id: ""
	I1014 15:05:53.499050   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.499058   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:53.499064   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:53.499121   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:53.534396   72639 cri.go:89] found id: ""
	I1014 15:05:53.534425   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.534435   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:53.534442   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:53.534504   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:53.571396   72639 cri.go:89] found id: ""
	I1014 15:05:53.571422   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.571432   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:53.571439   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:53.571496   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:53.606219   72639 cri.go:89] found id: ""
	I1014 15:05:53.606247   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.606254   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:53.606260   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:53.606309   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:53.644906   72639 cri.go:89] found id: ""
	I1014 15:05:53.644929   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.644938   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:53.644945   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:53.645005   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:53.684764   72639 cri.go:89] found id: ""
	I1014 15:05:53.684795   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.684808   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:53.684817   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:53.684872   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:53.720559   72639 cri.go:89] found id: ""
	I1014 15:05:53.720587   72639 logs.go:282] 0 containers: []
	W1014 15:05:53.720596   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:53.720605   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:53.720626   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:53.773759   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:53.773798   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:53.787688   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:53.787717   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:53.863141   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:53.863163   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:53.863176   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:53.942949   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:53.942989   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:56.487207   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:56.500670   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:56.500730   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:56.533851   72639 cri.go:89] found id: ""
	I1014 15:05:56.533882   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.533894   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:56.533901   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:56.533964   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:56.573169   72639 cri.go:89] found id: ""
	I1014 15:05:56.573194   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.573201   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:56.573207   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:56.573260   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:56.608110   72639 cri.go:89] found id: ""
	I1014 15:05:56.608138   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.608151   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:56.608158   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:56.608218   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:56.646030   72639 cri.go:89] found id: ""
	I1014 15:05:56.646054   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.646061   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:56.646067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:56.646112   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:56.689427   72639 cri.go:89] found id: ""
	I1014 15:05:56.689455   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.689465   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:56.689473   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:56.689528   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:56.723831   72639 cri.go:89] found id: ""
	I1014 15:05:56.723856   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.723865   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:56.723871   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:56.723928   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:56.756700   72639 cri.go:89] found id: ""
	I1014 15:05:56.756725   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.756734   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:56.756741   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:56.756808   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:56.788201   72639 cri.go:89] found id: ""
	I1014 15:05:56.788228   72639 logs.go:282] 0 containers: []
	W1014 15:05:56.788235   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:56.788242   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:05:56.788253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:05:56.847840   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:56.847876   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:56.861984   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:56.862016   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:56.933190   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:56.933214   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:56.933226   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:05:57.015909   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:05:57.015958   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:05:54.636591   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:56.638712   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.137008   72173 pod_ready.go:103] pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:55.566153   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:57.566963   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.067261   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:58.810164   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:00.811078   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:05:59.559421   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:05:59.575593   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:05:59.575673   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:05:59.611369   72639 cri.go:89] found id: ""
	I1014 15:05:59.611399   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.611409   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:05:59.611416   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:05:59.611485   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:05:59.645786   72639 cri.go:89] found id: ""
	I1014 15:05:59.645817   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.645827   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:05:59.645834   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:05:59.645895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:05:59.681463   72639 cri.go:89] found id: ""
	I1014 15:05:59.681491   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.681499   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:05:59.681504   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:05:59.681553   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:05:59.723738   72639 cri.go:89] found id: ""
	I1014 15:05:59.723767   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.723775   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:05:59.723782   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:05:59.723845   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:05:59.763890   72639 cri.go:89] found id: ""
	I1014 15:05:59.763919   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.763958   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:05:59.763966   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:05:59.764027   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:05:59.802981   72639 cri.go:89] found id: ""
	I1014 15:05:59.803007   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.803015   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:05:59.803021   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:05:59.803074   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:05:59.841887   72639 cri.go:89] found id: ""
	I1014 15:05:59.841916   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.841927   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:05:59.841934   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:05:59.841989   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:05:59.877190   72639 cri.go:89] found id: ""
	I1014 15:05:59.877221   72639 logs.go:282] 0 containers: []
	W1014 15:05:59.877231   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:05:59.877240   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:05:59.877254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:05:59.890838   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:05:59.890864   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:05:59.970122   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:05:59.970147   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:05:59.970163   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:00.058994   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:00.059032   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:00.103227   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:00.103262   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:02.655437   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:02.671240   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:02.671307   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:02.708826   72639 cri.go:89] found id: ""
	I1014 15:06:02.708859   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.708871   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:02.708879   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:02.708943   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:02.744504   72639 cri.go:89] found id: ""
	I1014 15:06:02.744535   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.744546   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:02.744553   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:02.744615   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:02.781144   72639 cri.go:89] found id: ""
	I1014 15:06:02.781180   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.781193   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:02.781201   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:02.781281   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:02.819527   72639 cri.go:89] found id: ""
	I1014 15:06:02.819558   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.819567   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:02.819572   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:02.819630   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:02.855653   72639 cri.go:89] found id: ""
	I1014 15:06:02.855683   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.855693   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:02.855700   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:02.855761   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:02.900843   72639 cri.go:89] found id: ""
	I1014 15:06:02.900876   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.900888   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:02.900896   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:02.900961   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:02.941812   72639 cri.go:89] found id: ""
	I1014 15:06:02.941840   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.941851   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:02.941857   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:02.941919   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:02.980213   72639 cri.go:89] found id: ""
	I1014 15:06:02.980238   72639 logs.go:282] 0 containers: []
	W1014 15:06:02.980246   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:02.980253   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:02.980265   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:00.130683   72173 pod_ready.go:82] duration metric: took 4m0.000550021s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:00.130707   72173 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-zc8zh" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:06:00.130723   72173 pod_ready.go:39] duration metric: took 4m13.708579322s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:00.130753   72173 kubeadm.go:597] duration metric: took 4m21.979284634s to restartPrimaryControlPlane
	W1014 15:06:00.130836   72173 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:00.130870   72173 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:02.566183   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.066638   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.309953   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:05.311484   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:03.034263   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:03.034301   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:03.048574   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:03.048606   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:03.121902   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:03.121925   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:03.121939   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:03.197407   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:03.197445   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:05.737723   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:05.751892   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:05.751959   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:05.789209   72639 cri.go:89] found id: ""
	I1014 15:06:05.789235   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.789242   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:05.789247   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:05.789294   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:05.826189   72639 cri.go:89] found id: ""
	I1014 15:06:05.826220   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.826229   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:05.826236   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:05.826344   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:05.864264   72639 cri.go:89] found id: ""
	I1014 15:06:05.864297   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.864308   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:05.864314   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:05.864371   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:05.899697   72639 cri.go:89] found id: ""
	I1014 15:06:05.899724   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.899732   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:05.899737   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:05.899784   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:05.939552   72639 cri.go:89] found id: ""
	I1014 15:06:05.939583   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.939593   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:05.939601   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:05.939668   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:05.999732   72639 cri.go:89] found id: ""
	I1014 15:06:05.999759   72639 logs.go:282] 0 containers: []
	W1014 15:06:05.999770   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:05.999776   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:05.999834   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:06.036228   72639 cri.go:89] found id: ""
	I1014 15:06:06.036259   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.036276   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:06.036284   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:06.036343   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:06.071744   72639 cri.go:89] found id: ""
	I1014 15:06:06.071774   72639 logs.go:282] 0 containers: []
	W1014 15:06:06.071785   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:06.071795   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:06.071808   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:06.125737   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:06.125774   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:06.139150   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:06.139177   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:06.206731   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:06.206757   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:06.206773   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:06.287183   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:06.287218   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:07.565983   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.065897   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:07.809832   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:10.309290   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:08.827345   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:08.841290   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:08.841384   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:08.877789   72639 cri.go:89] found id: ""
	I1014 15:06:08.877815   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.877824   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:08.877832   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:08.877895   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:08.912491   72639 cri.go:89] found id: ""
	I1014 15:06:08.912517   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.912525   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:08.912530   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:08.912586   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:08.948727   72639 cri.go:89] found id: ""
	I1014 15:06:08.948755   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.948765   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:08.948773   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:08.948837   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:08.984397   72639 cri.go:89] found id: ""
	I1014 15:06:08.984428   72639 logs.go:282] 0 containers: []
	W1014 15:06:08.984440   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:08.984448   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:08.984498   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:09.019222   72639 cri.go:89] found id: ""
	I1014 15:06:09.019250   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.019260   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:09.019268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:09.019329   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:09.058309   72639 cri.go:89] found id: ""
	I1014 15:06:09.058335   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.058346   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:09.058353   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:09.058415   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:09.096508   72639 cri.go:89] found id: ""
	I1014 15:06:09.096535   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.096544   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:09.096550   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:09.096599   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:09.134564   72639 cri.go:89] found id: ""
	I1014 15:06:09.134611   72639 logs.go:282] 0 containers: []
	W1014 15:06:09.134624   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:09.134635   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:09.134647   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:09.188220   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:09.188254   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:09.203119   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:09.203149   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:09.279357   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:09.279379   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:09.279390   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:09.364219   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:09.364253   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:11.910976   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:11.926067   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:11.926149   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:11.966238   72639 cri.go:89] found id: ""
	I1014 15:06:11.966271   72639 logs.go:282] 0 containers: []
	W1014 15:06:11.966282   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:11.966289   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:11.966350   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:12.002580   72639 cri.go:89] found id: ""
	I1014 15:06:12.002617   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.002630   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:12.002637   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:12.002698   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:12.037014   72639 cri.go:89] found id: ""
	I1014 15:06:12.037037   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.037046   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:12.037051   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:12.037111   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:12.070937   72639 cri.go:89] found id: ""
	I1014 15:06:12.070957   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.070965   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:12.070970   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:12.071019   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:12.104920   72639 cri.go:89] found id: ""
	I1014 15:06:12.104949   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.104960   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:12.104967   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:12.105026   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:12.142498   72639 cri.go:89] found id: ""
	I1014 15:06:12.142530   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.142544   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:12.142555   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:12.142628   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:12.179590   72639 cri.go:89] found id: ""
	I1014 15:06:12.179613   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.179621   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:12.179627   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:12.179675   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:12.213947   72639 cri.go:89] found id: ""
	I1014 15:06:12.213973   72639 logs.go:282] 0 containers: []
	W1014 15:06:12.213981   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:12.213989   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:12.213998   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:12.268214   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:12.268257   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:12.283561   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:12.283594   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:12.382344   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:12.382367   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:12.382377   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:12.469818   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:12.469854   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:12.066154   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.565962   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:12.310167   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:14.810273   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:15.011529   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:15.025355   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:15.025423   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:15.060996   72639 cri.go:89] found id: ""
	I1014 15:06:15.061028   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.061040   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:15.061047   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:15.061120   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:15.103050   72639 cri.go:89] found id: ""
	I1014 15:06:15.103074   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.103082   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:15.103088   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:15.103140   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:15.140095   72639 cri.go:89] found id: ""
	I1014 15:06:15.140122   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.140132   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:15.140139   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:15.140207   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:15.174612   72639 cri.go:89] found id: ""
	I1014 15:06:15.174642   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.174654   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:15.174669   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:15.174737   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:15.209116   72639 cri.go:89] found id: ""
	I1014 15:06:15.209142   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.209152   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:15.209160   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:15.209221   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:15.242857   72639 cri.go:89] found id: ""
	I1014 15:06:15.242885   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.242896   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:15.242902   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:15.242966   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:15.283038   72639 cri.go:89] found id: ""
	I1014 15:06:15.283066   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.283076   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:15.283083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:15.283144   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:15.319577   72639 cri.go:89] found id: ""
	I1014 15:06:15.319604   72639 logs.go:282] 0 containers: []
	W1014 15:06:15.319612   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:15.319622   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:15.319636   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:15.391485   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:15.391506   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:15.391520   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:15.470140   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:15.470192   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:15.513098   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:15.513132   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:15.568275   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:15.568305   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:17.065956   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.566207   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:17.308463   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:19.309185   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.310841   72390 pod_ready.go:103] pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:18.085915   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:18.113889   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:18.113958   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:18.167486   72639 cri.go:89] found id: ""
	I1014 15:06:18.167511   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.167519   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:18.167525   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:18.167568   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:18.230244   72639 cri.go:89] found id: ""
	I1014 15:06:18.230273   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.230283   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:18.230291   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:18.230351   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:18.264223   72639 cri.go:89] found id: ""
	I1014 15:06:18.264252   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.264261   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:18.264268   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:18.264332   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:18.298719   72639 cri.go:89] found id: ""
	I1014 15:06:18.298750   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.298762   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:18.298770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:18.298843   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:18.335113   72639 cri.go:89] found id: ""
	I1014 15:06:18.335140   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.335147   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:18.335153   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:18.335212   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:18.373690   72639 cri.go:89] found id: ""
	I1014 15:06:18.373721   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.373736   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:18.373743   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:18.373792   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:18.411138   72639 cri.go:89] found id: ""
	I1014 15:06:18.411171   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.411182   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:18.411190   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:18.411250   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:18.451281   72639 cri.go:89] found id: ""
	I1014 15:06:18.451306   72639 logs.go:282] 0 containers: []
	W1014 15:06:18.451314   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:18.451323   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:18.451334   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:18.502141   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:18.502178   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:18.517449   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:18.517476   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:18.586737   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:18.586760   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:18.586776   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:18.670234   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:18.670270   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.210200   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:21.222998   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.223053   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.257132   72639 cri.go:89] found id: ""
	I1014 15:06:21.257160   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.257167   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:06:21.257174   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.257237   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.290905   72639 cri.go:89] found id: ""
	I1014 15:06:21.290933   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.290945   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:06:21.290952   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.291007   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.331067   72639 cri.go:89] found id: ""
	I1014 15:06:21.331098   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.331108   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:06:21.331128   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.331178   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.370042   72639 cri.go:89] found id: ""
	I1014 15:06:21.370069   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.370077   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:06:21.370083   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.370141   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:21.414900   72639 cri.go:89] found id: ""
	I1014 15:06:21.414920   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.414932   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:06:21.414938   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:21.414985   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:21.452914   72639 cri.go:89] found id: ""
	I1014 15:06:21.452941   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.452952   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:06:21.452960   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:21.453022   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:21.486725   72639 cri.go:89] found id: ""
	I1014 15:06:21.486752   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.486763   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:21.486770   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:06:21.486831   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:06:21.524012   72639 cri.go:89] found id: ""
	I1014 15:06:21.524034   72639 logs.go:282] 0 containers: []
	W1014 15:06:21.524042   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:06:21.524049   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:21.524059   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:21.603238   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:06:21.603279   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:21.645655   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:21.645689   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:21.701053   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:21.701092   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:21.715515   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:21.715542   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:06:21.781831   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:06:22.067051   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:24.567173   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:21.810342   72390 pod_ready.go:82] duration metric: took 4m0.007657098s for pod "metrics-server-6867b74b74-bcrqs" in "kube-system" namespace to be "Ready" ...
	E1014 15:06:21.810365   72390 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 15:06:21.810382   72390 pod_ready.go:39] duration metric: took 4m7.92113061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:21.810401   72390 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:21.810433   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:21.810488   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:21.856565   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:21.856587   72390 cri.go:89] found id: ""
	I1014 15:06:21.856594   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:21.856654   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.861036   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:21.861091   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:21.898486   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:21.898517   72390 cri.go:89] found id: ""
	I1014 15:06:21.898528   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:21.898587   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.903145   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:21.903245   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:21.941127   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:21.941164   72390 cri.go:89] found id: ""
	I1014 15:06:21.941173   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:21.941232   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.945584   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:21.945658   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:21.994370   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:21.994398   72390 cri.go:89] found id: ""
	I1014 15:06:21.994407   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:21.994454   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:21.998498   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:21.998547   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:22.037415   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.037443   72390 cri.go:89] found id: ""
	I1014 15:06:22.037453   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:22.037507   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.041882   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:22.041947   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:22.079219   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.079243   72390 cri.go:89] found id: ""
	I1014 15:06:22.079252   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:22.079319   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.083373   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:22.083432   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:22.120795   72390 cri.go:89] found id: ""
	I1014 15:06:22.120818   72390 logs.go:282] 0 containers: []
	W1014 15:06:22.120825   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:22.120832   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:22.120889   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:22.158545   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.158571   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.158577   72390 cri.go:89] found id: ""
	I1014 15:06:22.158586   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:22.158662   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.162500   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:22.166734   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:22.166759   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:22.202711   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:22.202736   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:22.279594   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:22.279635   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:22.293836   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:22.293863   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:22.335451   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:22.335478   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:22.374244   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:22.374274   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:22.422538   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:22.422567   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:22.486973   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:22.487009   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:22.528871   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:22.528899   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:22.575947   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:22.575982   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:22.713356   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:22.713387   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:22.760315   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:22.760348   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:22.811144   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:22.811169   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:25.780847   72390 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:25.800698   72390 api_server.go:72] duration metric: took 4m18.640749756s to wait for apiserver process to appear ...
	I1014 15:06:25.800733   72390 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:25.800779   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:25.800845   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:25.841159   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:25.841193   72390 cri.go:89] found id: ""
	I1014 15:06:25.841203   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:25.841259   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.845503   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:25.845560   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:25.884122   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:25.884151   72390 cri.go:89] found id: ""
	I1014 15:06:25.884161   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:25.884223   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.889638   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:25.889700   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:25.931199   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:25.931220   72390 cri.go:89] found id: ""
	I1014 15:06:25.931230   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:25.931285   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.936063   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:25.936127   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:25.979162   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:25.979188   72390 cri.go:89] found id: ""
	I1014 15:06:25.979197   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:25.979254   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:25.983550   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:25.983611   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:26.021835   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:26.021854   72390 cri.go:89] found id: ""
	I1014 15:06:26.021862   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:26.021911   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.026005   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:26.026073   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:26.067719   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:26.067740   72390 cri.go:89] found id: ""
	I1014 15:06:26.067749   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:26.067803   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.073387   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:26.073453   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:26.116305   72390 cri.go:89] found id: ""
	I1014 15:06:26.116336   72390 logs.go:282] 0 containers: []
	W1014 15:06:26.116349   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:26.116358   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:26.116427   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:26.156959   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.156985   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.156991   72390 cri.go:89] found id: ""
	I1014 15:06:26.156999   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:26.157051   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.161437   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:26.165696   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:26.165718   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:26.282026   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:26.282056   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:26.333504   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:26.333543   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:26.376435   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:26.376469   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:26.416633   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:26.416660   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:26.388546   72173 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.257645941s)
	I1014 15:06:26.388631   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:26.407118   72173 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:26.417718   72173 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:26.428364   72173 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:26.428391   72173 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:26.428451   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:26.437953   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:26.438026   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:26.448356   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:26.458476   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:26.458541   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:26.469941   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.482934   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:26.483016   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:26.495682   72173 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:26.506113   72173 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:26.506176   72173 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:26.517784   72173 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:26.568927   72173 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:06:26.568978   72173 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:26.685727   72173 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:26.685855   72173 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:26.685963   72173 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:26.693948   72173 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:26.696177   72173 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:26.696269   72173 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:26.696318   72173 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:26.696388   72173 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:26.696438   72173 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:26.696495   72173 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:26.696536   72173 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:26.696588   72173 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:26.696639   72173 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:26.696696   72173 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:26.696760   72173 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:26.700275   72173 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:26.700406   72173 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:26.831734   72173 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:27.336318   72173 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:06:27.574604   72173 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:27.681370   72173 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:27.788769   72173 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:27.789324   72173 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:27.791842   72173 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:24.282018   72639 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:24.295177   72639 kubeadm.go:597] duration metric: took 4m4.450514459s to restartPrimaryControlPlane
	W1014 15:06:24.295255   72639 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:06:24.295283   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:06:27.793786   72173 out.go:235]   - Booting up control plane ...
	I1014 15:06:27.793891   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:27.793980   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:27.794089   72173 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:27.815223   72173 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:27.821764   72173 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:27.821817   72173 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:27.965327   72173 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:06:27.965707   72173 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:06:28.967332   72173 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001260991s
	I1014 15:06:28.967473   72173 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:06:29.238014   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.942706631s)
	I1014 15:06:29.238096   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:29.258804   72639 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:06:29.269440   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:06:29.279613   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:06:29.279633   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:06:29.279672   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:06:29.292840   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:06:29.292912   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:06:29.306987   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:06:29.319896   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:06:29.319970   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:06:29.333974   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.343993   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:06:29.344051   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:06:29.354691   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:06:29.364354   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:06:29.364422   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:06:29.374674   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:06:29.452845   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:06:29.452961   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:06:29.618263   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:06:29.618446   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:06:29.618582   72639 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:06:29.813387   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:06:29.815501   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:06:29.815610   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:06:29.815697   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:06:29.815799   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:06:29.815879   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:06:29.815971   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:06:29.816039   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:06:29.816125   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:06:29.816206   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:06:29.816307   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:06:29.816404   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:06:29.816454   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:06:29.816531   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:06:29.944505   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:06:30.106467   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:06:30.226356   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:06:30.322169   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:06:30.342382   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:06:30.343666   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:06:30.343736   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:06:30.507000   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:06:27.066923   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:29.068434   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:26.453659   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:26.453693   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:26.900485   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:26.900518   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:26.925431   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:26.925461   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:26.986104   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:26.986140   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:27.037557   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:27.037600   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:27.084362   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:27.084397   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:27.138680   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:27.138713   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:27.191283   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:27.191314   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:29.761781   72390 api_server.go:253] Checking apiserver healthz at https://192.168.50.128:8444/healthz ...
	I1014 15:06:29.769020   72390 api_server.go:279] https://192.168.50.128:8444/healthz returned 200:
	ok
	I1014 15:06:29.770210   72390 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:29.770232   72390 api_server.go:131] duration metric: took 3.969490314s to wait for apiserver health ...
	I1014 15:06:29.770242   72390 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:29.770268   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:06:29.770328   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:06:29.827908   72390 cri.go:89] found id: "a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:29.827930   72390 cri.go:89] found id: ""
	I1014 15:06:29.827939   72390 logs.go:282] 1 containers: [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f]
	I1014 15:06:29.827994   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.837786   72390 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:06:29.837864   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:06:29.877625   72390 cri.go:89] found id: "0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:29.877661   72390 cri.go:89] found id: ""
	I1014 15:06:29.877672   72390 logs.go:282] 1 containers: [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69]
	I1014 15:06:29.877738   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.882502   72390 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:06:29.882578   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:06:29.923002   72390 cri.go:89] found id: "6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:29.923027   72390 cri.go:89] found id: ""
	I1014 15:06:29.923037   72390 logs.go:282] 1 containers: [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1]
	I1014 15:06:29.923094   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.927559   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:06:29.927621   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:06:29.966098   72390 cri.go:89] found id: "be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:29.966124   72390 cri.go:89] found id: ""
	I1014 15:06:29.966133   72390 logs.go:282] 1 containers: [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa]
	I1014 15:06:29.966189   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:29.972287   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:06:29.972371   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:06:30.024389   72390 cri.go:89] found id: "8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.024414   72390 cri.go:89] found id: ""
	I1014 15:06:30.024423   72390 logs.go:282] 1 containers: [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42]
	I1014 15:06:30.024481   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.029914   72390 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:06:30.029976   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:06:30.085703   72390 cri.go:89] found id: "7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.085727   72390 cri.go:89] found id: ""
	I1014 15:06:30.085737   72390 logs.go:282] 1 containers: [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4]
	I1014 15:06:30.085806   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.097004   72390 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:06:30.097098   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:06:30.147464   72390 cri.go:89] found id: ""
	I1014 15:06:30.147494   72390 logs.go:282] 0 containers: []
	W1014 15:06:30.147505   72390 logs.go:284] No container was found matching "kindnet"
	I1014 15:06:30.147512   72390 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1014 15:06:30.147573   72390 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 15:06:30.195003   72390 cri.go:89] found id: "54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.195030   72390 cri.go:89] found id: "48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:30.195036   72390 cri.go:89] found id: ""
	I1014 15:06:30.195045   72390 logs.go:282] 2 containers: [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076]
	I1014 15:06:30.195099   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.199436   72390 ssh_runner.go:195] Run: which crictl
	I1014 15:06:30.204079   72390 logs.go:123] Gathering logs for dmesg ...
	I1014 15:06:30.204105   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 15:06:30.221021   72390 logs.go:123] Gathering logs for kube-apiserver [a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f] ...
	I1014 15:06:30.221049   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a2df52bb84059ae89b602f653e9d28f6fb0e7b2f9604024b6a3cb8e3819e251f"
	I1014 15:06:30.280979   72390 logs.go:123] Gathering logs for coredns [6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1] ...
	I1014 15:06:30.281013   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6e3748f01b40b78bc02973d9d42878e08a57c087b4396929e70607b36a22b0a1"
	I1014 15:06:30.339261   72390 logs.go:123] Gathering logs for kube-proxy [8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42] ...
	I1014 15:06:30.339291   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8562700fa08dc06dfa5ed0576c936b0a43d42be8606c358939d388f10bce7b42"
	I1014 15:06:30.390034   72390 logs.go:123] Gathering logs for kube-controller-manager [7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4] ...
	I1014 15:06:30.390081   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7cfcaa231ef94327055a0fc6f9bbbae20b89953d430d0da65429021d62b05ed4"
	I1014 15:06:30.461221   72390 logs.go:123] Gathering logs for storage-provisioner [54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81] ...
	I1014 15:06:30.461262   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54da9997e909c78f062c4e7ead07bc5a5bab770f2f8d2be9cc878d7abdb5ca81"
	I1014 15:06:30.504100   72390 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:06:30.504134   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:06:30.870561   72390 logs.go:123] Gathering logs for kubelet ...
	I1014 15:06:30.870629   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:06:30.942952   72390 logs.go:123] Gathering logs for container status ...
	I1014 15:06:30.942998   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:06:30.995435   72390 logs.go:123] Gathering logs for etcd [0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69] ...
	I1014 15:06:30.995484   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0aaa149381e5220a104c9d3a7c33decd30f2a2927070cde12c17198f7f4c6d69"
	I1014 15:06:31.038804   72390 logs.go:123] Gathering logs for kube-scheduler [be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa] ...
	I1014 15:06:31.038839   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2f06f84e6b5f0f8bb4901b1e36acd7d79c1c009153d017601a35fc653efdaa"
	I1014 15:06:31.080187   72390 logs.go:123] Gathering logs for storage-provisioner [48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076] ...
	I1014 15:06:31.080218   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 48bc323790016363cbfe670226a909c6a72a3a117ad592bee60d280190023076"
	I1014 15:06:31.122248   72390 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:06:31.122295   72390 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 15:06:30.509157   72639 out.go:235]   - Booting up control plane ...
	I1014 15:06:30.509293   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:06:30.518440   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:06:30.520572   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:06:30.522337   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:06:30.524996   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:06:33.742510   72390 system_pods.go:59] 8 kube-system pods found
	I1014 15:06:33.742539   72390 system_pods.go:61] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.742546   72390 system_pods.go:61] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.742552   72390 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.742557   72390 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.742562   72390 system_pods.go:61] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.742566   72390 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.742576   72390 system_pods.go:61] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.742582   72390 system_pods.go:61] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.742615   72390 system_pods.go:74] duration metric: took 3.972347536s to wait for pod list to return data ...
	I1014 15:06:33.742628   72390 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:33.744532   72390 default_sa.go:45] found service account: "default"
	I1014 15:06:33.744551   72390 default_sa.go:55] duration metric: took 1.918153ms for default service account to be created ...
	I1014 15:06:33.744558   72390 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:33.750292   72390 system_pods.go:86] 8 kube-system pods found
	I1014 15:06:33.750315   72390 system_pods.go:89] "coredns-7c65d6cfc9-994hx" [b0291ce4-5503-4bb1-8e36-d956b115c3ac] Running
	I1014 15:06:33.750320   72390 system_pods.go:89] "etcd-default-k8s-diff-port-201291" [5e359915-fb2e-46d5-a1a8-826341943fc3] Running
	I1014 15:06:33.750324   72390 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-201291" [047bd813-aaab-428e-ab47-12932195c91f] Running
	I1014 15:06:33.750329   72390 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-201291" [6eb0eb91-21ce-4e56-9758-fbd453b0d4df] Running
	I1014 15:06:33.750332   72390 system_pods.go:89] "kube-proxy-rh82t" [1dcd3c39-1bfe-40ac-a012-ea17ea1dfb6d] Running
	I1014 15:06:33.750335   72390 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-201291" [aaeefd23-6adc-4c69-acca-38e3f3172b2e] Running
	I1014 15:06:33.750341   72390 system_pods.go:89] "metrics-server-6867b74b74-bcrqs" [508697cd-cf31-4078-8985-5c0b77966695] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:33.750346   72390 system_pods.go:89] "storage-provisioner" [62925b5e-ec1d-4d5b-aa70-a4fc555db52d] Running
	I1014 15:06:33.750352   72390 system_pods.go:126] duration metric: took 5.790549ms to wait for k8s-apps to be running ...
	I1014 15:06:33.750358   72390 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:33.750398   72390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:33.770342   72390 system_svc.go:56] duration metric: took 19.978034ms WaitForService to wait for kubelet
	I1014 15:06:33.770370   72390 kubeadm.go:582] duration metric: took 4m26.610427104s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:33.770392   72390 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:33.774149   72390 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:33.774176   72390 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:33.774190   72390 node_conditions.go:105] duration metric: took 3.792746ms to run NodePressure ...
	I1014 15:06:33.774203   72390 start.go:241] waiting for startup goroutines ...
	I1014 15:06:33.774217   72390 start.go:246] waiting for cluster config update ...
	I1014 15:06:33.774232   72390 start.go:255] writing updated cluster config ...
	I1014 15:06:33.774560   72390 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:33.823879   72390 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:33.825962   72390 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-201291" cluster and "default" namespace by default
	I1014 15:06:33.976430   72173 kubeadm.go:310] [api-check] The API server is healthy after 5.00773575s
	I1014 15:06:33.990496   72173 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:06:34.010821   72173 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:06:34.051244   72173 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:06:34.051513   72173 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-989166 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:06:34.066447   72173 kubeadm.go:310] [bootstrap-token] Using token: 46olqw.t0lfd7bmyz0olhbh
	I1014 15:06:34.067925   72173 out.go:235]   - Configuring RBAC rules ...
	I1014 15:06:34.068073   72173 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:06:34.077775   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:06:34.097676   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:06:34.103212   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:06:34.112640   72173 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:06:34.119886   72173 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:06:34.382372   72173 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:06:34.825514   72173 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:06:35.383856   72173 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:06:35.383877   72173 kubeadm.go:310] 
	I1014 15:06:35.383939   72173 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:06:35.383976   72173 kubeadm.go:310] 
	I1014 15:06:35.384094   72173 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:06:35.384103   72173 kubeadm.go:310] 
	I1014 15:06:35.384136   72173 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:06:35.384223   72173 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:06:35.384286   72173 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:06:35.384311   72173 kubeadm.go:310] 
	I1014 15:06:35.384414   72173 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:06:35.384430   72173 kubeadm.go:310] 
	I1014 15:06:35.384499   72173 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:06:35.384512   72173 kubeadm.go:310] 
	I1014 15:06:35.384597   72173 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:06:35.384685   72173 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:06:35.384744   72173 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:06:35.384750   72173 kubeadm.go:310] 
	I1014 15:06:35.384821   72173 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:06:35.384928   72173 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:06:35.384940   72173 kubeadm.go:310] 
	I1014 15:06:35.385047   72173 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385192   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:06:35.385224   72173 kubeadm.go:310] 	--control-plane 
	I1014 15:06:35.385231   72173 kubeadm.go:310] 
	I1014 15:06:35.385322   72173 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:06:35.385334   72173 kubeadm.go:310] 
	I1014 15:06:35.385449   72173 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 46olqw.t0lfd7bmyz0olhbh \
	I1014 15:06:35.385588   72173 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:06:35.386604   72173 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:06:35.386674   72173 cni.go:84] Creating CNI manager for ""
	I1014 15:06:35.386689   72173 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:06:35.388617   72173 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:06:31.069009   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:33.565864   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:35.390017   72173 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:06:35.402242   72173 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
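The two commands above create /etc/cni/net.d on the guest and copy a 496-byte bridge conflist into it. The actual contents of 1-k8s.conflist are not shown in the log; the Go sketch below writes a generic bridge CNI config of roughly that shape, and every field value in it (CNI version, bridge name, subnet) is an illustrative assumption, not minikube's real template.

package main

import (
	"os"
	"path/filepath"
)

// A generic bridge CNI conflist. The real 1-k8s.conflist used by minikube is
// not shown in the log, so all values here are illustrative only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	dir := "/etc/cni/net.d"
	if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors "sudo mkdir -p /etc/cni/net.d"
		panic(err)
	}
	// mirrors the scp of the conflist shown in the log
	if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}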
	I1014 15:06:35.428958   72173 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:06:35.429016   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:35.429080   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-989166 minikube.k8s.io/updated_at=2024_10_14T15_06_35_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=embed-certs-989166 minikube.k8s.io/primary=true
	I1014 15:06:35.475775   72173 ops.go:34] apiserver oom_adj: -16
	I1014 15:06:35.645234   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.145613   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:36.646197   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.145401   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:37.645956   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.145978   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:38.645292   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.145444   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.646019   72173 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:06:39.869659   72173 kubeadm.go:1113] duration metric: took 4.440701402s to wait for elevateKubeSystemPrivileges
	I1014 15:06:39.869695   72173 kubeadm.go:394] duration metric: took 5m1.76989803s to StartCluster
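The burst of repeated "kubectl get sa default" runs above is a poll: the step retries roughly every 500ms until the default service account exists, after which the cluster-admin binding created earlier takes effect (about 4.4s here). A minimal sketch of that retry loop with client-go follows; the kubeconfig path is taken from the log, while the namespace, timeout, and helper name are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDefaultSA polls until the "default" ServiceAccount appears, roughly
// matching the repeated "kubectl get sa default" calls in the log above.
func waitForDefaultSA(cs *kubernetes.Clientset, ns string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		_, err := cs.CoreV1().ServiceAccounts(ns).Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for default service account in %q", ns)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // path taken from the log
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDefaultSA(cs, "default", time.Minute); err != nil {
		panic(err)
	}
}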
	I1014 15:06:39.869713   72173 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.869797   72173 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:06:39.872564   72173 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:06:39.872947   72173 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.41 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:06:39.873165   72173 config.go:182] Loaded profile config "embed-certs-989166": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 15:06:39.873085   72173 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:06:39.873246   72173 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-989166"
	I1014 15:06:39.873256   72173 addons.go:69] Setting metrics-server=true in profile "embed-certs-989166"
	I1014 15:06:39.873273   72173 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-989166"
	I1014 15:06:39.873272   72173 addons.go:69] Setting default-storageclass=true in profile "embed-certs-989166"
	I1014 15:06:39.873319   72173 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-989166"
	W1014 15:06:39.873282   72173 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:06:39.873417   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873282   72173 addons.go:234] Setting addon metrics-server=true in "embed-certs-989166"
	W1014 15:06:39.873476   72173 addons.go:243] addon metrics-server should already be in state true
	I1014 15:06:39.873504   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.873875   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873888   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.873920   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873947   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.873986   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.874050   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.874921   72173 out.go:177] * Verifying Kubernetes components...
	I1014 15:06:39.876972   72173 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41669
	I1014 15:06:39.893367   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41843
	I1014 15:06:39.893341   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39139
	I1014 15:06:39.893905   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.893915   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894023   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.894471   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894493   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894651   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894677   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894713   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.894731   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.894942   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895073   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895563   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.895593   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.895778   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.895970   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.896249   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.896293   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.899661   72173 addons.go:234] Setting addon default-storageclass=true in "embed-certs-989166"
	W1014 15:06:39.899685   72173 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:06:39.899714   72173 host.go:66] Checking if "embed-certs-989166" exists ...
	I1014 15:06:39.900088   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.900131   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.912591   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39805
	I1014 15:06:39.913089   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.913630   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.913652   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.914099   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.914287   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.914839   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39111
	I1014 15:06:39.915288   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.915783   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.915802   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.916147   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.916171   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.916382   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.917766   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.917796   72173 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:06:39.919192   72173 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:06:35.567508   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:38.065792   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:40.066618   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:39.919297   72173 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:39.919320   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:06:39.919339   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.920468   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:06:39.920489   72173 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:06:39.920507   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.921603   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45255
	I1014 15:06:39.921970   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.922502   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.922525   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.922994   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.923333   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923585   72173 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:06:39.923627   72173 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:06:39.923826   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.923846   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.923876   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924028   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924157   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.924270   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.924291   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.924310   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.924397   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.924674   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.924840   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.925027   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.925201   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:39.945435   72173 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40911
	I1014 15:06:39.945958   72173 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:06:39.946468   72173 main.go:141] libmachine: Using API Version  1
	I1014 15:06:39.946497   72173 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:06:39.946855   72173 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:06:39.947023   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetState
	I1014 15:06:39.948734   72173 main.go:141] libmachine: (embed-certs-989166) Calling .DriverName
	I1014 15:06:39.948924   72173 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:39.948942   72173 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:06:39.948966   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHHostname
	I1014 15:06:39.951019   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951418   72173 main.go:141] libmachine: (embed-certs-989166) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ee:96:19", ip: ""} in network mk-embed-certs-989166: {Iface:virbr1 ExpiryTime:2024-10-14 16:01:24 +0000 UTC Type:0 Mac:52:54:00:ee:96:19 Iaid: IPaddr:192.168.39.41 Prefix:24 Hostname:embed-certs-989166 Clientid:01:52:54:00:ee:96:19}
	I1014 15:06:39.951437   72173 main.go:141] libmachine: (embed-certs-989166) DBG | domain embed-certs-989166 has defined IP address 192.168.39.41 and MAC address 52:54:00:ee:96:19 in network mk-embed-certs-989166
	I1014 15:06:39.951570   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHPort
	I1014 15:06:39.951742   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHKeyPath
	I1014 15:06:39.951918   72173 main.go:141] libmachine: (embed-certs-989166) Calling .GetSSHUsername
	I1014 15:06:39.952058   72173 sshutil.go:53] new ssh client: &{IP:192.168.39.41 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/embed-certs-989166/id_rsa Username:docker}
	I1014 15:06:40.129893   72173 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:06:40.215427   72173 node_ready.go:35] waiting up to 6m0s for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224710   72173 node_ready.go:49] node "embed-certs-989166" has status "Ready":"True"
	I1014 15:06:40.224731   72173 node_ready.go:38] duration metric: took 9.266994ms for node "embed-certs-989166" to be "Ready" ...
	I1014 15:06:40.224742   72173 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:40.230651   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
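The node_ready and pod_ready waits above repeatedly read object status until the Ready condition reports True (the node flips in ~9ms here, the coredns pod a few seconds later). A minimal sketch of the node-side check with client-go is below; the kubeconfig path and node name are taken from the log, everything else is assumed.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeIsReady reports whether the named node has the Ready condition set to
// True, mirroring the node_ready check in the log above.
func nodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeIsReady(cs, "embed-certs-989166")
	if err != nil {
		panic(err)
	}
	fmt.Println("node Ready:", ready)
}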
	I1014 15:06:40.394829   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:06:40.422573   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:06:40.430300   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:06:40.430319   72173 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:06:40.503826   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:06:40.503857   72173 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:06:40.586087   72173 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.586116   72173 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:06:40.726605   72173 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:06:40.887453   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887475   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.887809   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.887857   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.887869   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.887886   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.887898   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.888127   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.888150   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:40.888160   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.901694   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:40.901717   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:40.902091   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:40.902103   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:40.902111   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.352636   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.352670   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.352963   72173 main.go:141] libmachine: (embed-certs-989166) DBG | Closing plugin on server side
	I1014 15:06:41.353017   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353029   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.353036   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.353043   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.353274   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.353302   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578200   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578219   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578484   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578529   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578554   72173 main.go:141] libmachine: Making call to close driver server
	I1014 15:06:41.578588   72173 main.go:141] libmachine: (embed-certs-989166) Calling .Close
	I1014 15:06:41.578827   72173 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:06:41.578844   72173 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:06:41.578854   72173 addons.go:475] Verifying addon metrics-server=true in "embed-certs-989166"
	I1014 15:06:41.581312   72173 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:06:41.582506   72173 addons.go:510] duration metric: took 1.709432803s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:06:42.237265   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.240605   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:42.067701   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:44.566134   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:46.738094   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:48.739238   72173 pod_ready.go:103] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.238145   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.238167   72173 pod_ready.go:82] duration metric: took 9.007493385s for pod "coredns-7c65d6cfc9-6bmwg" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.238176   72173 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243268   72173 pod_ready.go:93] pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.243299   72173 pod_ready.go:82] duration metric: took 5.116183ms for pod "coredns-7c65d6cfc9-l95hj" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.243311   72173 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.247979   72173 pod_ready.go:93] pod "etcd-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.248001   72173 pod_ready.go:82] duration metric: took 4.682826ms for pod "etcd-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.248009   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252590   72173 pod_ready.go:93] pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.252615   72173 pod_ready.go:82] duration metric: took 4.599399ms for pod "kube-apiserver-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.252624   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257541   72173 pod_ready.go:93] pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.257566   72173 pod_ready.go:82] duration metric: took 4.935116ms for pod "kube-controller-manager-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.257575   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:47.064934   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.066284   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:49.635873   72173 pod_ready.go:93] pod "kube-proxy-g572s" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:49.635895   72173 pod_ready.go:82] duration metric: took 378.313947ms for pod "kube-proxy-g572s" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:49.635904   72173 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035141   72173 pod_ready.go:93] pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace has status "Ready":"True"
	I1014 15:06:50.035169   72173 pod_ready.go:82] duration metric: took 399.257073ms for pod "kube-scheduler-embed-certs-989166" in "kube-system" namespace to be "Ready" ...
	I1014 15:06:50.035179   72173 pod_ready.go:39] duration metric: took 9.810424567s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:06:50.035195   72173 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:06:50.035258   72173 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:06:50.054964   72173 api_server.go:72] duration metric: took 10.181978114s to wait for apiserver process to appear ...
	I1014 15:06:50.054996   72173 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:06:50.055020   72173 api_server.go:253] Checking apiserver healthz at https://192.168.39.41:8443/healthz ...
	I1014 15:06:50.061606   72173 api_server.go:279] https://192.168.39.41:8443/healthz returned 200:
	ok
	I1014 15:06:50.063380   72173 api_server.go:141] control plane version: v1.31.1
	I1014 15:06:50.063411   72173 api_server.go:131] duration metric: took 8.40661ms to wait for apiserver health ...
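The api_server wait above first confirms a kube-apiserver process exists (pgrep), then polls https://192.168.39.41:8443/healthz until it answers 200 with body "ok". The sketch below shows that HTTP probe pattern only; the InsecureSkipVerify shortcut is an illustrative assumption (a real check should verify the cluster CA), and the timeout values are not from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it answers
// 200 "ok" or the timeout expires, like the health wait in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut only; verify the cluster CA in real use.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.41:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}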
	I1014 15:06:50.063421   72173 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:06:50.239258   72173 system_pods.go:59] 9 kube-system pods found
	I1014 15:06:50.239286   72173 system_pods.go:61] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.239292   72173 system_pods.go:61] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.239295   72173 system_pods.go:61] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.239299   72173 system_pods.go:61] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.239303   72173 system_pods.go:61] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.239305   72173 system_pods.go:61] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.239308   72173 system_pods.go:61] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.239314   72173 system_pods.go:61] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.239317   72173 system_pods.go:61] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.239325   72173 system_pods.go:74] duration metric: took 175.89649ms to wait for pod list to return data ...
	I1014 15:06:50.239334   72173 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:06:50.435980   72173 default_sa.go:45] found service account: "default"
	I1014 15:06:50.436007   72173 default_sa.go:55] duration metric: took 196.667838ms for default service account to be created ...
	I1014 15:06:50.436017   72173 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:06:50.639185   72173 system_pods.go:86] 9 kube-system pods found
	I1014 15:06:50.639224   72173 system_pods.go:89] "coredns-7c65d6cfc9-6bmwg" [7cf9ad75-b75b-4cce-aad8-d68a810a5d0a] Running
	I1014 15:06:50.639234   72173 system_pods.go:89] "coredns-7c65d6cfc9-l95hj" [6563de05-ef49-4fa9-bf0b-a826fbc8bb14] Running
	I1014 15:06:50.639241   72173 system_pods.go:89] "etcd-embed-certs-989166" [8d29b784-a336-4cb9-ac24-3e9e129e4f49] Running
	I1014 15:06:50.639248   72173 system_pods.go:89] "kube-apiserver-embed-certs-989166" [a98c0a3d-0fd7-4f02-8d61-93f8cada740e] Running
	I1014 15:06:50.639254   72173 system_pods.go:89] "kube-controller-manager-embed-certs-989166" [e3146331-cd59-4a34-8ca8-c9637acdb687] Running
	I1014 15:06:50.639262   72173 system_pods.go:89] "kube-proxy-g572s" [5d2e4a08-5d05-48ab-8fbe-3bb3fe2f77ab] Running
	I1014 15:06:50.639269   72173 system_pods.go:89] "kube-scheduler-embed-certs-989166" [fd61dc8f-51aa-43ce-8e6b-8be0c50073fc] Running
	I1014 15:06:50.639283   72173 system_pods.go:89] "metrics-server-6867b74b74-jl6pp" [c244e53d-c492-426a-be7f-d405f2defd17] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:06:50.639295   72173 system_pods.go:89] "storage-provisioner" [ad6caa59-bc75-4e8f-8052-86d963b92fe3] Running
	I1014 15:06:50.639309   72173 system_pods.go:126] duration metric: took 203.286322ms to wait for k8s-apps to be running ...
	I1014 15:06:50.639327   72173 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:06:50.639388   72173 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:06:50.655377   72173 system_svc.go:56] duration metric: took 16.0447ms WaitForService to wait for kubelet
	I1014 15:06:50.655402   72173 kubeadm.go:582] duration metric: took 10.782421893s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:06:50.655425   72173 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:06:50.835507   72173 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:06:50.835543   72173 node_conditions.go:123] node cpu capacity is 2
	I1014 15:06:50.835556   72173 node_conditions.go:105] duration metric: took 180.126755ms to run NodePressure ...
	I1014 15:06:50.835570   72173 start.go:241] waiting for startup goroutines ...
	I1014 15:06:50.835580   72173 start.go:246] waiting for cluster config update ...
	I1014 15:06:50.835594   72173 start.go:255] writing updated cluster config ...
	I1014 15:06:50.835924   72173 ssh_runner.go:195] Run: rm -f paused
	I1014 15:06:50.883737   72173 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:06:50.886200   72173 out.go:177] * Done! kubectl is now configured to use "embed-certs-989166" cluster and "default" namespace by default
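At this point the embed-certs-989166 profile is up and its context has been written to the kubeconfig. A small client-go sketch of using that context is below; it assumes the context name matches the profile name from the "Done!" line and that the default kubeconfig loading rules find the file written above.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Select the context written by the run above; the context name is assumed
	// to match the profile name.
	rules := clientcmd.NewDefaultClientConfigLoadingRules()
	overrides := &clientcmd.ConfigOverrides{CurrentContext: "embed-certs-989166"}
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-system pods: %d\n", len(pods.Items))
}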
	I1014 15:06:51.066344   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:53.566466   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:56.066734   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:06:58.567007   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:01.066112   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:03.068758   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:05.566174   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:07.566274   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:09.566829   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:10.525694   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:07:10.526665   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:10.526908   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:12.066402   71679 pod_ready.go:103] pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:13.560638   71679 pod_ready.go:82] duration metric: took 4m0.000980901s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" ...
	E1014 15:07:13.560669   71679 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-br4tl" in "kube-system" namespace to be "Ready" (will not retry!)
	I1014 15:07:13.560693   71679 pod_ready.go:39] duration metric: took 4m13.04495779s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:13.560725   71679 kubeadm.go:597] duration metric: took 4m21.006404411s to restartPrimaryControlPlane
	W1014 15:07:13.560791   71679 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1014 15:07:13.560823   71679 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:07:15.527128   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:15.527376   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:25.527779   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:25.528060   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:39.775370   71679 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.214519412s)
	I1014 15:07:39.775448   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:07:39.790736   71679 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 15:07:39.800575   71679 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:07:39.810380   71679 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:07:39.810402   71679 kubeadm.go:157] found existing configuration files:
	
	I1014 15:07:39.810462   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:07:39.819880   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:07:39.819938   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:07:39.830542   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:07:39.840268   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:07:39.840318   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:07:39.849727   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.858513   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:07:39.858651   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:07:39.869154   71679 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:07:39.878724   71679 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:07:39.878798   71679 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
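The sequence above is the stale-config check that follows the kubeadm reset: for each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf it greps for the expected control-plane endpoint and removes the file when that cannot be confirmed (here all four are already gone, so the grep fails and the rm is a no-op). A minimal Go sketch of the same pattern follows; the paths and endpoint are taken from the log, the helper name and error handling are assumptions.

package main

import (
	"fmt"
	"os"
	"strings"
)

// removeStaleKubeconfigs deletes any of the given kubeconfig files that do not
// reference the expected control-plane endpoint, mirroring the grep-then-rm
// sequence in the log above.
func removeStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(p); rmErr != nil && !os.IsNotExist(rmErr) {
				return fmt.Errorf("removing %s: %w", p, rmErr)
			}
		}
	}
	return nil
}

func main() {
	paths := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", paths); err != nil {
		panic(err)
	}
}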
	I1014 15:07:39.888123   71679 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:07:39.942676   71679 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 15:07:39.942771   71679 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:07:40.060558   71679 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:07:40.060698   71679 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:07:40.060861   71679 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 15:07:40.076085   71679 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:07:40.078200   71679 out.go:235]   - Generating certificates and keys ...
	I1014 15:07:40.078301   71679 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:07:40.078381   71679 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:07:40.078505   71679 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:07:40.078620   71679 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:07:40.078717   71679 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:07:40.078794   71679 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:07:40.078887   71679 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:07:40.078973   71679 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:07:40.079069   71679 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:07:40.079161   71679 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:07:40.079234   71679 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:07:40.079315   71679 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:07:40.177082   71679 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:07:40.264965   71679 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 15:07:40.415660   71679 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:07:40.556759   71679 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:07:40.727152   71679 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:07:40.727573   71679 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:07:40.730409   71679 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:07:40.732204   71679 out.go:235]   - Booting up control plane ...
	I1014 15:07:40.732328   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:07:40.732440   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:07:40.732529   71679 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:07:40.751839   71679 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:07:40.758034   71679 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:07:40.758095   71679 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:07:40.895135   71679 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 15:07:40.895254   71679 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 15:07:41.397066   71679 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.194797ms
	I1014 15:07:41.397209   71679 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 15:07:46.401247   71679 kubeadm.go:310] [api-check] The API server is healthy after 5.002197966s
	I1014 15:07:46.419134   71679 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 15:07:46.433128   71679 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 15:07:46.477079   71679 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 15:07:46.477289   71679 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-813300 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 15:07:46.492703   71679 kubeadm.go:310] [bootstrap-token] Using token: 1vsv04.mf3pqj2ow157sq8h
	I1014 15:07:46.494314   71679 out.go:235]   - Configuring RBAC rules ...
	I1014 15:07:46.494467   71679 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 15:07:46.501090   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 15:07:46.515987   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 15:07:46.522417   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 15:07:46.528612   71679 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 15:07:46.536975   71679 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 15:07:46.810642   71679 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 15:07:47.240531   71679 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 15:07:47.810279   71679 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 15:07:47.811169   71679 kubeadm.go:310] 
	I1014 15:07:47.811230   71679 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 15:07:47.811238   71679 kubeadm.go:310] 
	I1014 15:07:47.811307   71679 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 15:07:47.811312   71679 kubeadm.go:310] 
	I1014 15:07:47.811335   71679 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 15:07:47.811388   71679 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 15:07:47.811440   71679 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 15:07:47.811447   71679 kubeadm.go:310] 
	I1014 15:07:47.811501   71679 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 15:07:47.811507   71679 kubeadm.go:310] 
	I1014 15:07:47.811546   71679 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 15:07:47.811553   71679 kubeadm.go:310] 
	I1014 15:07:47.811600   71679 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 15:07:47.811667   71679 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 15:07:47.811755   71679 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 15:07:47.811771   71679 kubeadm.go:310] 
	I1014 15:07:47.811844   71679 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 15:07:47.811912   71679 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 15:07:47.811921   71679 kubeadm.go:310] 
	I1014 15:07:47.811999   71679 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812091   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 \
	I1014 15:07:47.812139   71679 kubeadm.go:310] 	--control-plane 
	I1014 15:07:47.812153   71679 kubeadm.go:310] 
	I1014 15:07:47.812231   71679 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 15:07:47.812238   71679 kubeadm.go:310] 
	I1014 15:07:47.812306   71679 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 1vsv04.mf3pqj2ow157sq8h \
	I1014 15:07:47.812393   71679 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:4e1b3ed6b98151b035c29eec5bfc3dc1a4f697072e5aa720022df6fc4a333194 
	I1014 15:07:47.814071   71679 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:07:47.814103   71679 cni.go:84] Creating CNI manager for ""
	I1014 15:07:47.814113   71679 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 15:07:47.816033   71679 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1014 15:07:45.528527   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:07:45.528768   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:07:47.817325   71679 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1014 15:07:47.829639   71679 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1014 15:07:47.847797   71679 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 15:07:47.847857   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:47.847929   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-813300 minikube.k8s.io/updated_at=2024_10_14T15_07_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=no-preload-813300 minikube.k8s.io/primary=true
	I1014 15:07:48.039959   71679 ops.go:34] apiserver oom_adj: -16
	I1014 15:07:48.040095   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:48.540295   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.040911   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:49.540233   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.040146   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:50.540494   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.041033   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:51.540516   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.040935   71679 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 15:07:52.146854   71679 kubeadm.go:1113] duration metric: took 4.299055033s to wait for elevateKubeSystemPrivileges
	I1014 15:07:52.146890   71679 kubeadm.go:394] duration metric: took 4m59.642546726s to StartCluster
	I1014 15:07:52.146906   71679 settings.go:142] acquiring lock: {Name:mk9f6f6b9dc8c3435472077cbb9091b8e648d1b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.146987   71679 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 15:07:52.148782   71679 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-7836/kubeconfig: {Name:mk17dd47b52fd7a8ee17f563c35b22ef1b7788f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 15:07:52.149067   71679 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.61.13 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1014 15:07:52.149168   71679 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 15:07:52.149303   71679 addons.go:69] Setting storage-provisioner=true in profile "no-preload-813300"
	I1014 15:07:52.149333   71679 addons.go:234] Setting addon storage-provisioner=true in "no-preload-813300"
	I1014 15:07:52.149342   71679 config.go:182] Loaded profile config "no-preload-813300": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	W1014 15:07:52.149355   71679 addons.go:243] addon storage-provisioner should already be in state true
	I1014 15:07:52.149378   71679 addons.go:69] Setting default-storageclass=true in profile "no-preload-813300"
	I1014 15:07:52.149390   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149412   71679 addons.go:69] Setting metrics-server=true in profile "no-preload-813300"
	I1014 15:07:52.149447   71679 addons.go:234] Setting addon metrics-server=true in "no-preload-813300"
	W1014 15:07:52.149461   71679 addons.go:243] addon metrics-server should already be in state true
	I1014 15:07:52.149494   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.149421   71679 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-813300"
	I1014 15:07:52.149748   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149789   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149861   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149890   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.149905   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.149928   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.150482   71679 out.go:177] * Verifying Kubernetes components...
	I1014 15:07:52.152252   71679 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 15:07:52.167205   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38341
	I1014 15:07:52.170723   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45457
	I1014 15:07:52.170742   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.170728   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39829
	I1014 15:07:52.171111   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171302   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171321   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171386   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.171678   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171702   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.171717   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.171900   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.171916   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.172164   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172243   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.172279   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172325   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.172386   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.172868   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.172916   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.175482   71679 addons.go:234] Setting addon default-storageclass=true in "no-preload-813300"
	W1014 15:07:52.175502   71679 addons.go:243] addon default-storageclass should already be in state true
	I1014 15:07:52.175529   71679 host.go:66] Checking if "no-preload-813300" exists ...
	I1014 15:07:52.175763   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.175792   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.190835   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46633
	I1014 15:07:52.191422   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.191767   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39613
	I1014 15:07:52.191901   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35293
	I1014 15:07:52.192010   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.192027   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192317   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192436   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.192481   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.192988   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193010   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.192992   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.193060   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.193474   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193524   71679 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 15:07:52.193530   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.193563   71679 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 15:07:52.193729   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.193770   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.195702   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.195770   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.197642   71679 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 15:07:52.197652   71679 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 15:07:52.198957   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 15:07:52.198978   71679 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 15:07:52.198998   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.199075   71679 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.199096   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 15:07:52.199111   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.202637   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203064   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203088   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203245   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.203515   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.203519   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.203663   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.203812   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.203878   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.203903   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.204187   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.204377   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.204535   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.204683   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.231332   71679 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38395
	I1014 15:07:52.231813   71679 main.go:141] libmachine: () Calling .GetVersion
	I1014 15:07:52.232320   71679 main.go:141] libmachine: Using API Version  1
	I1014 15:07:52.232344   71679 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 15:07:52.232645   71679 main.go:141] libmachine: () Calling .GetMachineName
	I1014 15:07:52.232836   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetState
	I1014 15:07:52.234309   71679 main.go:141] libmachine: (no-preload-813300) Calling .DriverName
	I1014 15:07:52.234570   71679 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.234585   71679 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 15:07:52.234622   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHHostname
	I1014 15:07:52.237749   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238364   71679 main.go:141] libmachine: (no-preload-813300) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:86:40", ip: ""} in network mk-no-preload-813300: {Iface:virbr3 ExpiryTime:2024-10-14 16:02:23 +0000 UTC Type:0 Mac:52:54:00:ab:86:40 Iaid: IPaddr:192.168.61.13 Prefix:24 Hostname:no-preload-813300 Clientid:01:52:54:00:ab:86:40}
	I1014 15:07:52.238393   71679 main.go:141] libmachine: (no-preload-813300) DBG | domain no-preload-813300 has defined IP address 192.168.61.13 and MAC address 52:54:00:ab:86:40 in network mk-no-preload-813300
	I1014 15:07:52.238562   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHPort
	I1014 15:07:52.238744   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHKeyPath
	I1014 15:07:52.238903   71679 main.go:141] libmachine: (no-preload-813300) Calling .GetSSHUsername
	I1014 15:07:52.239031   71679 sshutil.go:53] new ssh client: &{IP:192.168.61.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/no-preload-813300/id_rsa Username:docker}
	I1014 15:07:52.375830   71679 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 15:07:52.401606   71679 node_ready.go:35] waiting up to 6m0s for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431363   71679 node_ready.go:49] node "no-preload-813300" has status "Ready":"True"
	I1014 15:07:52.431393   71679 node_ready.go:38] duration metric: took 29.758277ms for node "no-preload-813300" to be "Ready" ...
	I1014 15:07:52.431405   71679 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:07:52.446747   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:52.501642   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 15:07:52.501664   71679 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 15:07:52.509733   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 15:07:52.515833   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 15:07:52.536485   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 15:07:52.536508   71679 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 15:07:52.622269   71679 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.622299   71679 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 15:07:52.702873   71679 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 15:07:52.909827   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.909865   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910194   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910209   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.910235   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.910249   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.910510   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.910525   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918161   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:52.918182   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:52.918473   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:52.918493   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:52.918480   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:53.707659   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.191781585s)
	I1014 15:07:53.707706   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.707719   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708011   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708035   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:53.708052   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:53.708062   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:53.708330   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:53.708346   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.060665   71679 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.357747934s)
	I1014 15:07:54.060752   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.060770   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.061069   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.061153   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.061164   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.061173   71679 main.go:141] libmachine: Making call to close driver server
	I1014 15:07:54.061184   71679 main.go:141] libmachine: (no-preload-813300) Calling .Close
	I1014 15:07:54.062712   71679 main.go:141] libmachine: (no-preload-813300) DBG | Closing plugin on server side
	I1014 15:07:54.062787   71679 main.go:141] libmachine: Successfully made call to close driver server
	I1014 15:07:54.062797   71679 main.go:141] libmachine: Making call to close connection to plugin binary
	I1014 15:07:54.062811   71679 addons.go:475] Verifying addon metrics-server=true in "no-preload-813300"
	I1014 15:07:54.064762   71679 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1014 15:07:54.066623   71679 addons.go:510] duration metric: took 1.917465271s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1014 15:07:54.454216   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:56.455649   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:56.455674   71679 pod_ready.go:82] duration metric: took 4.00889709s for pod "coredns-7c65d6cfc9-fjzn8" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:56.455689   71679 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:58.461687   71679 pod_ready.go:103] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"False"
	I1014 15:07:59.962360   71679 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.962382   71679 pod_ready.go:82] duration metric: took 3.506686516s for pod "coredns-7c65d6cfc9-nvpvl" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.962391   71679 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969241   71679 pod_ready.go:93] pod "etcd-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.969261   71679 pod_ready.go:82] duration metric: took 6.864356ms for pod "etcd-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.969270   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974810   71679 pod_ready.go:93] pod "kube-apiserver-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.974828   71679 pod_ready.go:82] duration metric: took 5.552122ms for pod "kube-apiserver-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.974837   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979555   71679 pod_ready.go:93] pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.979580   71679 pod_ready.go:82] duration metric: took 4.735265ms for pod "kube-controller-manager-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.979592   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985111   71679 pod_ready.go:93] pod "kube-proxy-54rrd" in "kube-system" namespace has status "Ready":"True"
	I1014 15:07:59.985138   71679 pod_ready.go:82] duration metric: took 5.538126ms for pod "kube-proxy-54rrd" in "kube-system" namespace to be "Ready" ...
	I1014 15:07:59.985150   71679 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359524   71679 pod_ready.go:93] pod "kube-scheduler-no-preload-813300" in "kube-system" namespace has status "Ready":"True"
	I1014 15:08:00.359548   71679 pod_ready.go:82] duration metric: took 374.389838ms for pod "kube-scheduler-no-preload-813300" in "kube-system" namespace to be "Ready" ...
	I1014 15:08:00.359558   71679 pod_ready.go:39] duration metric: took 7.928141116s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 15:08:00.359575   71679 api_server.go:52] waiting for apiserver process to appear ...
	I1014 15:08:00.359626   71679 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 15:08:00.376115   71679 api_server.go:72] duration metric: took 8.22700683s to wait for apiserver process to appear ...
	I1014 15:08:00.376144   71679 api_server.go:88] waiting for apiserver healthz status ...
	I1014 15:08:00.376169   71679 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8443/healthz ...
	I1014 15:08:00.381225   71679 api_server.go:279] https://192.168.61.13:8443/healthz returned 200:
	ok
	I1014 15:08:00.382348   71679 api_server.go:141] control plane version: v1.31.1
	I1014 15:08:00.382377   71679 api_server.go:131] duration metric: took 6.225832ms to wait for apiserver health ...
	I1014 15:08:00.382386   71679 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 15:08:00.563350   71679 system_pods.go:59] 9 kube-system pods found
	I1014 15:08:00.563382   71679 system_pods.go:61] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.563386   71679 system_pods.go:61] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.563390   71679 system_pods.go:61] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.563394   71679 system_pods.go:61] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.563399   71679 system_pods.go:61] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.563402   71679 system_pods.go:61] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.563405   71679 system_pods.go:61] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.563412   71679 system_pods.go:61] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.563416   71679 system_pods.go:61] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.563424   71679 system_pods.go:74] duration metric: took 181.032852ms to wait for pod list to return data ...
	I1014 15:08:00.563436   71679 default_sa.go:34] waiting for default service account to be created ...
	I1014 15:08:00.760054   71679 default_sa.go:45] found service account: "default"
	I1014 15:08:00.760084   71679 default_sa.go:55] duration metric: took 196.637678ms for default service account to be created ...
	I1014 15:08:00.760095   71679 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 15:08:00.962545   71679 system_pods.go:86] 9 kube-system pods found
	I1014 15:08:00.962577   71679 system_pods.go:89] "coredns-7c65d6cfc9-fjzn8" [7850936e-8104-4e8f-a4cc-948579963790] Running
	I1014 15:08:00.962583   71679 system_pods.go:89] "coredns-7c65d6cfc9-nvpvl" [d926987d-9c61-4bf6-83e3-97334715e1d5] Running
	I1014 15:08:00.962587   71679 system_pods.go:89] "etcd-no-preload-813300" [e5895ac5-7829-4d8c-b5be-d621dbba78bd] Running
	I1014 15:08:00.962591   71679 system_pods.go:89] "kube-apiserver-no-preload-813300" [a30389db-98c0-49e3-8a9b-f3414e62c09a] Running
	I1014 15:08:00.962605   71679 system_pods.go:89] "kube-controller-manager-no-preload-813300" [f710bd35-f215-4aa1-96a9-fb5be44d04cc] Running
	I1014 15:08:00.962609   71679 system_pods.go:89] "kube-proxy-54rrd" [0c8ab0de-c204-46f5-a725-5dcd9eff59d8] Running
	I1014 15:08:00.962613   71679 system_pods.go:89] "kube-scheduler-no-preload-813300" [5386153a-f569-4332-b448-2a000f7a16bb] Running
	I1014 15:08:00.962619   71679 system_pods.go:89] "metrics-server-6867b74b74-8vfll" [cf3594da-9896-49ed-b47f-5bbea36c9aaf] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 15:08:00.962623   71679 system_pods.go:89] "storage-provisioner" [2d79bfdf-bda5-42bf-8ddf-73d7df4855db] Running
	I1014 15:08:00.962633   71679 system_pods.go:126] duration metric: took 202.532202ms to wait for k8s-apps to be running ...
	I1014 15:08:00.962640   71679 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 15:08:00.962682   71679 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:00.980272   71679 system_svc.go:56] duration metric: took 17.624381ms WaitForService to wait for kubelet
	I1014 15:08:00.980310   71679 kubeadm.go:582] duration metric: took 8.831207019s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 15:08:00.980333   71679 node_conditions.go:102] verifying NodePressure condition ...
	I1014 15:08:01.160914   71679 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1014 15:08:01.160947   71679 node_conditions.go:123] node cpu capacity is 2
	I1014 15:08:01.160961   71679 node_conditions.go:105] duration metric: took 180.622279ms to run NodePressure ...
	I1014 15:08:01.160976   71679 start.go:241] waiting for startup goroutines ...
	I1014 15:08:01.160985   71679 start.go:246] waiting for cluster config update ...
	I1014 15:08:01.161000   71679 start.go:255] writing updated cluster config ...
	I1014 15:08:01.161357   71679 ssh_runner.go:195] Run: rm -f paused
	I1014 15:08:01.212486   71679 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 15:08:01.215083   71679 out.go:177] * Done! kubectl is now configured to use "no-preload-813300" cluster and "default" namespace by default
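	The run above (profile "no-preload-813300", pid 71679) ends with minikube enabling the default-storageclass, storage-provisioner and metrics-server addons and confirming every system-critical pod is Ready. A minimal, hedged sketch of how that end state could be re-checked by hand after such a run; the profile name comes from the log, and the assumption that the kubeconfig context carries the same name follows minikube's usual convention:
	
	# list the kube-system pods the log waited on (coredns, etcd, kube-apiserver, kube-proxy, ...)
	kubectl --context no-preload-813300 get pods -n kube-system
	# confirm the APIService that the metrics-server addon registers
	kubectl --context no-preload-813300 get apiservice v1beta1.metrics.k8s.io
	# same health endpoint the log probed at https://192.168.61.13:8443/healthz
	kubectl --context no-preload-813300 get --raw /healthz
	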
	I1014 15:08:25.530669   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:08:25.530970   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:08:25.530998   72639 kubeadm.go:310] 
	I1014 15:08:25.531059   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:08:25.531114   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:08:25.531125   72639 kubeadm.go:310] 
	I1014 15:08:25.531177   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:08:25.531238   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:08:25.531381   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:08:25.531392   72639 kubeadm.go:310] 
	I1014 15:08:25.531527   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:08:25.531587   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:08:25.531633   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:08:25.531643   72639 kubeadm.go:310] 
	I1014 15:08:25.531766   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:08:25.531872   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:08:25.531891   72639 kubeadm.go:310] 
	I1014 15:08:25.532038   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:08:25.532174   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:08:25.532281   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:08:25.532377   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:08:25.532418   72639 kubeadm.go:310] 
	I1014 15:08:25.532543   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:08:25.532640   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:08:25.532742   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1014 15:08:25.532833   72639 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1014 15:08:25.532870   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1014 15:08:31.003635   72639 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.470741012s)
	I1014 15:08:31.003724   72639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 15:08:31.018666   72639 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 15:08:31.029707   72639 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 15:08:31.029729   72639 kubeadm.go:157] found existing configuration files:
	
	I1014 15:08:31.029776   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 15:08:31.039554   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 15:08:31.039625   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 15:08:31.049748   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 15:08:31.059618   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 15:08:31.059682   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 15:08:31.069369   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.078321   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 15:08:31.078385   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 15:08:31.088006   72639 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 15:08:31.096681   72639 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 15:08:31.096742   72639 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 15:08:31.106269   72639 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1014 15:08:31.182768   72639 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1014 15:08:31.182833   72639 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 15:08:31.341660   72639 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 15:08:31.341833   72639 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 15:08:31.342008   72639 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1014 15:08:31.538731   72639 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 15:08:31.540933   72639 out.go:235]   - Generating certificates and keys ...
	I1014 15:08:31.541037   72639 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 15:08:31.541124   72639 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 15:08:31.541270   72639 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1014 15:08:31.541386   72639 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1014 15:08:31.541486   72639 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1014 15:08:31.541559   72639 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1014 15:08:31.541663   72639 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1014 15:08:31.541750   72639 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1014 15:08:31.542000   72639 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1014 15:08:31.542534   72639 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1014 15:08:31.542627   72639 kubeadm.go:310] [certs] Using the existing "sa" key
	I1014 15:08:31.542711   72639 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 15:08:31.847005   72639 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 15:08:32.049586   72639 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 15:08:32.355652   72639 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 15:08:32.511031   72639 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 15:08:32.526310   72639 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 15:08:32.526755   72639 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 15:08:32.526841   72639 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 15:08:32.665898   72639 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 15:08:32.667688   72639 out.go:235]   - Booting up control plane ...
	I1014 15:08:32.667806   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 15:08:32.681232   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 15:08:32.682929   72639 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 15:08:32.683704   72639 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 15:08:32.685936   72639 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1014 15:09:12.687998   72639 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1014 15:09:12.688248   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:12.688517   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:17.689026   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:17.689213   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:27.689821   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:27.690119   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:09:47.690936   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:09:47.691185   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691438   72639 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1014 15:10:27.691721   72639 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1014 15:10:27.691744   72639 kubeadm.go:310] 
	I1014 15:10:27.691779   72639 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1014 15:10:27.691847   72639 kubeadm.go:310] 		timed out waiting for the condition
	I1014 15:10:27.691867   72639 kubeadm.go:310] 
	I1014 15:10:27.691907   72639 kubeadm.go:310] 	This error is likely caused by:
	I1014 15:10:27.691972   72639 kubeadm.go:310] 		- The kubelet is not running
	I1014 15:10:27.692124   72639 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1014 15:10:27.692136   72639 kubeadm.go:310] 
	I1014 15:10:27.692253   72639 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1014 15:10:27.692311   72639 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1014 15:10:27.692352   72639 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1014 15:10:27.692363   72639 kubeadm.go:310] 
	I1014 15:10:27.692497   72639 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1014 15:10:27.692617   72639 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1014 15:10:27.692633   72639 kubeadm.go:310] 
	I1014 15:10:27.692787   72639 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1014 15:10:27.692915   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1014 15:10:27.693051   72639 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1014 15:10:27.693146   72639 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1014 15:10:27.693158   72639 kubeadm.go:310] 
	I1014 15:10:27.693497   72639 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 15:10:27.693627   72639 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1014 15:10:27.693710   72639 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1014 15:10:27.693770   72639 kubeadm.go:394] duration metric: took 8m7.905137486s to StartCluster
	I1014 15:10:27.693808   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1014 15:10:27.693863   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 15:10:27.735373   72639 cri.go:89] found id: ""
	I1014 15:10:27.735410   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.735419   72639 logs.go:284] No container was found matching "kube-apiserver"
	I1014 15:10:27.735425   72639 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1014 15:10:27.735484   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 15:10:27.775691   72639 cri.go:89] found id: ""
	I1014 15:10:27.775713   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.775721   72639 logs.go:284] No container was found matching "etcd"
	I1014 15:10:27.775727   72639 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1014 15:10:27.775778   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 15:10:27.811621   72639 cri.go:89] found id: ""
	I1014 15:10:27.811645   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.811653   72639 logs.go:284] No container was found matching "coredns"
	I1014 15:10:27.811658   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1014 15:10:27.811718   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 15:10:27.850894   72639 cri.go:89] found id: ""
	I1014 15:10:27.850917   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.850925   72639 logs.go:284] No container was found matching "kube-scheduler"
	I1014 15:10:27.850931   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1014 15:10:27.850979   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 15:10:27.891559   72639 cri.go:89] found id: ""
	I1014 15:10:27.891596   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.891608   72639 logs.go:284] No container was found matching "kube-proxy"
	I1014 15:10:27.891616   72639 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 15:10:27.891671   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 15:10:27.929896   72639 cri.go:89] found id: ""
	I1014 15:10:27.929929   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.929942   72639 logs.go:284] No container was found matching "kube-controller-manager"
	I1014 15:10:27.930002   72639 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1014 15:10:27.930096   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 15:10:27.964801   72639 cri.go:89] found id: ""
	I1014 15:10:27.964828   72639 logs.go:282] 0 containers: []
	W1014 15:10:27.964839   72639 logs.go:284] No container was found matching "kindnet"
	I1014 15:10:27.964845   72639 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 15:10:27.964905   72639 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 15:10:28.011737   72639 cri.go:89] found id: ""
	I1014 15:10:28.011761   72639 logs.go:282] 0 containers: []
	W1014 15:10:28.011769   72639 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1014 15:10:28.011777   72639 logs.go:123] Gathering logs for describe nodes ...
	I1014 15:10:28.011788   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1014 15:10:28.088053   72639 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1014 15:10:28.088082   72639 logs.go:123] Gathering logs for CRI-O ...
	I1014 15:10:28.088098   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1014 15:10:28.214495   72639 logs.go:123] Gathering logs for container status ...
	I1014 15:10:28.214531   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 15:10:28.254766   72639 logs.go:123] Gathering logs for kubelet ...
	I1014 15:10:28.254796   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1014 15:10:28.304942   72639 logs.go:123] Gathering logs for dmesg ...
	I1014 15:10:28.304977   72639 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1014 15:10:28.319674   72639 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1014 15:10:28.319729   72639 out.go:270] * 
	W1014 15:10:28.319783   72639 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.319802   72639 out.go:270] * 
	W1014 15:10:28.320716   72639 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 15:10:28.324551   72639 out.go:201] 
	W1014 15:10:28.325905   72639 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1014 15:10:28.325940   72639 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1014 15:10:28.325985   72639 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1014 15:10:28.327473   72639 out.go:201] 
	
	
	==> CRI-O <==
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.918412419Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919311918380716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=529b0392-686a-4012-a1d2-efbda6a721c6 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.919380521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9828fcd5-dad3-4a57-9394-910a4919661e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.919447245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9828fcd5-dad3-4a57-9394-910a4919661e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.919497169Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9828fcd5-dad3-4a57-9394-910a4919661e name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.952528932Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd698ba8-a7ce-43e7-a4d2-bc39afe83c61 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.952637043Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd698ba8-a7ce-43e7-a4d2-bc39afe83c61 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.954154793Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=59ceaa40-f45d-45c5-9f7c-361333a7a8d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.954583669Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919311954557033,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=59ceaa40-f45d-45c5-9f7c-361333a7a8d0 name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.955117892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=19f111d8-abbb-46ce-b0de-a043ad711628 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.955168101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=19f111d8-abbb-46ce-b0de-a043ad711628 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.955197279Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=19f111d8-abbb-46ce-b0de-a043ad711628 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.988326241Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c8456b71-e3ec-4503-a4e8-af3e102a53e7 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.988453198Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c8456b71-e3ec-4503-a4e8-af3e102a53e7 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.989597483Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1ce5b83e-4fff-4657-a135-22e1f155fc1a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.990015630Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919311989992009,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1ce5b83e-4fff-4657-a135-22e1f155fc1a name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.990628798Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6aa3510f-30de-469d-9300-803294935cf7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.990704461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6aa3510f-30de-469d-9300-803294935cf7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:51 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:51.990764758Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6aa3510f-30de-469d-9300-803294935cf7 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:52 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:52.023861461Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70d2fb78-bc66-4047-ba74-f874f04972b9 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:21:52 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:52.023947192Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70d2fb78-bc66-4047-ba74-f874f04972b9 name=/runtime.v1.RuntimeService/Version
	Oct 14 15:21:52 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:52.025259287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f54cb79c-2db5-4480-97e4-9b57dca34fdc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:21:52 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:52.025676599Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728919312025656875,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f54cb79c-2db5-4480-97e4-9b57dca34fdc name=/runtime.v1.ImageService/ImageFsInfo
	Oct 14 15:21:52 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:52.026128286Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dce3467a-2b92-4e71-a52c-75271757de07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:52 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:52.026205020Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dce3467a-2b92-4e71-a52c-75271757de07 name=/runtime.v1.RuntimeService/ListContainers
	Oct 14 15:21:52 old-k8s-version-399767 crio[635]: time="2024-10-14 15:21:52.026242923Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dce3467a-2b92-4e71-a52c-75271757de07 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct14 15:01] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052051] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050116] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Oct14 15:02] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.605075] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.701901] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.221397] systemd-fstab-generator[560]: Ignoring "noauto" option for root device
	[  +0.058897] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064336] systemd-fstab-generator[573]: Ignoring "noauto" option for root device
	[  +0.225460] systemd-fstab-generator[586]: Ignoring "noauto" option for root device
	[  +0.166157] systemd-fstab-generator[598]: Ignoring "noauto" option for root device
	[  +0.271984] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +6.642881] systemd-fstab-generator[879]: Ignoring "noauto" option for root device
	[  +0.070885] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.471808] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[ +13.079512] kauditd_printk_skb: 46 callbacks suppressed
	[Oct14 15:06] systemd-fstab-generator[5074]: Ignoring "noauto" option for root device
	[Oct14 15:08] systemd-fstab-generator[5361]: Ignoring "noauto" option for root device
	[  +0.073672] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 15:21:52 up 19 min,  0 users,  load average: 0.11, 0.06, 0.01
	Linux old-k8s-version-399767 5.10.207 #1 SMP Tue Oct 8 15:16:25 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: bufio.(*Reader).Read(0xc0004b47e0, 0xc0000d8818, 0x9, 0x9, 0xc000d5adc8, 0x40a605, 0xc0001bb560)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /usr/local/go/src/bufio/bufio.go:227 +0x222
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: io.ReadAtLeast(0x4f04880, 0xc0004b47e0, 0xc0000d8818, 0x9, 0x9, 0x9, 0xc000632810, 0x3f50d20, 0xc000996920)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /usr/local/go/src/io/io.go:314 +0x87
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: io.ReadFull(...)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /usr/local/go/src/io/io.go:333
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader(0xc0000d8818, 0x9, 0x9, 0x4f04880, 0xc0004b47e0, 0x0, 0xc000000000, 0xc000996920, 0xc000c3e5b0)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x89
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0000d87e0, 0xc000a67da0, 0x1, 0x0, 0x0)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008b4a80)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: goroutine 164 [runnable]:
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: runtime.Gosched(...)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /usr/local/go/src/runtime/proc.go:271
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0004b4900, 0x0, 0x0)
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:549 +0x1a5
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008b4a80)
	Oct 14 15:21:52 old-k8s-version-399767 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Oct 14 15:21:52 old-k8s-version-399767 kubelet[6865]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Oct 14 15:21:52 old-k8s-version-399767 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 2 (235.286404ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-399767" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (138.37s)
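The failure above traces back to the kubelet never becoming healthy on the old-k8s-version node: kubeadm's wait-control-plane phase times out, CRI-O lists no running containers, and minikube's own error text suggests inspecting the kubelet and retrying with the systemd cgroup driver. A minimal shell sketch of those suggested checks on the node (assuming a systemd host and the default CRI-O socket path shown in the log; the profile name old-k8s-version-399767 is taken from the run above):

	# Check kubelet service state and recent logs (as suggested by the kubeadm output above)
	systemctl status kubelet
	journalctl -xeu kubelet --no-pager | tail -n 100

	# List any control-plane containers CRI-O started; the run above listed none
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# Retry with the cgroup driver minikube's error message suggests for this situation
	minikube start -p old-k8s-version-399767 --extra-config=kubelet.cgroup-driver=systemd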

                                                
                                    

Test pass (247/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.18
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.1/json-events 5.88
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.13
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
22 TestOffline 56.96
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 134.43
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/PullSecret 9.48
34 TestAddons/parallel/Registry 18.71
36 TestAddons/parallel/InspektorGadget 10.76
39 TestAddons/parallel/CSI 52.69
40 TestAddons/parallel/Headlamp 18.07
41 TestAddons/parallel/CloudSpanner 5.59
42 TestAddons/parallel/LocalPath 52.12
43 TestAddons/parallel/NvidiaDevicePlugin 6.6
44 TestAddons/parallel/Yakd 11.86
47 TestCertOptions 98.02
48 TestCertExpiration 319.55
50 TestForceSystemdFlag 81.6
51 TestForceSystemdEnv 72.53
53 TestKVMDriverInstallOrUpdate 1.23
57 TestErrorSpam/setup 44.51
58 TestErrorSpam/start 0.35
59 TestErrorSpam/status 0.74
60 TestErrorSpam/pause 1.54
61 TestErrorSpam/unpause 1.74
62 TestErrorSpam/stop 6.12
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 86.39
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 55.35
69 TestFunctional/serial/KubeContext 0.04
70 TestFunctional/serial/KubectlGetPods 0.09
73 TestFunctional/serial/CacheCmd/cache/add_remote 3.57
74 TestFunctional/serial/CacheCmd/cache/add_local 1.09
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
76 TestFunctional/serial/CacheCmd/cache/list 0.05
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
78 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
79 TestFunctional/serial/CacheCmd/cache/delete 0.1
80 TestFunctional/serial/MinikubeKubectlCmd 0.11
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
82 TestFunctional/serial/ExtraConfig 35.37
83 TestFunctional/serial/ComponentHealth 0.06
84 TestFunctional/serial/LogsCmd 1.4
85 TestFunctional/serial/LogsFileCmd 1.42
86 TestFunctional/serial/InvalidService 4.37
88 TestFunctional/parallel/ConfigCmd 0.38
89 TestFunctional/parallel/DashboardCmd 12.47
90 TestFunctional/parallel/DryRun 0.28
91 TestFunctional/parallel/InternationalLanguage 0.13
92 TestFunctional/parallel/StatusCmd 1.07
96 TestFunctional/parallel/ServiceCmdConnect 7.73
97 TestFunctional/parallel/AddonsCmd 0.15
98 TestFunctional/parallel/PersistentVolumeClaim 40.23
100 TestFunctional/parallel/SSHCmd 0.45
101 TestFunctional/parallel/CpCmd 1.49
102 TestFunctional/parallel/MySQL 26.94
103 TestFunctional/parallel/FileSync 0.24
104 TestFunctional/parallel/CertSync 1.49
108 TestFunctional/parallel/NodeLabels 0.07
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
112 TestFunctional/parallel/License 0.25
113 TestFunctional/parallel/ServiceCmd/DeployApp 12.2
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
127 TestFunctional/parallel/ProfileCmd/profile_list 0.35
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
129 TestFunctional/parallel/MountCmd/any-port 19.19
130 TestFunctional/parallel/ServiceCmd/List 0.38
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.36
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
133 TestFunctional/parallel/ServiceCmd/Format 0.33
134 TestFunctional/parallel/ServiceCmd/URL 0.38
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.65
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
141 TestFunctional/parallel/ImageCommands/ImageBuild 5.96
142 TestFunctional/parallel/ImageCommands/Setup 0.4
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.71
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.22
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.76
149 TestFunctional/parallel/MountCmd/specific-port 1.8
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.52
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 196.2
159 TestMultiControlPlane/serial/DeployApp 5.91
160 TestMultiControlPlane/serial/PingHostFromPods 1.17
161 TestMultiControlPlane/serial/AddWorkerNode 56.71
162 TestMultiControlPlane/serial/NodeLabels 0.07
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.85
164 TestMultiControlPlane/serial/CopyFile 13.06
170 TestMultiControlPlane/serial/DeleteSecondaryNode 16.9
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.62
173 TestMultiControlPlane/serial/RestartCluster 347.55
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
175 TestMultiControlPlane/serial/AddSecondaryNode 75.75
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.85
180 TestJSONOutput/start/Command 53.68
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.7
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.63
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 7.35
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
208 TestMainNoArgs 0.05
209 TestMinikubeProfile 86.01
212 TestMountStart/serial/StartWithMountFirst 27.97
213 TestMountStart/serial/VerifyMountFirst 0.37
214 TestMountStart/serial/StartWithMountSecond 28.39
215 TestMountStart/serial/VerifyMountSecond 0.37
216 TestMountStart/serial/DeleteFirst 0.66
217 TestMountStart/serial/VerifyMountPostDelete 0.37
218 TestMountStart/serial/Stop 1.27
219 TestMountStart/serial/RestartStopped 22.73
220 TestMountStart/serial/VerifyMountPostStop 0.37
223 TestMultiNode/serial/FreshStart2Nodes 111.52
224 TestMultiNode/serial/DeployApp2Nodes 5.01
225 TestMultiNode/serial/PingHostFrom2Pods 0.79
226 TestMultiNode/serial/AddNode 46.95
227 TestMultiNode/serial/MultiNodeLabels 0.06
228 TestMultiNode/serial/ProfileList 0.58
229 TestMultiNode/serial/CopyFile 7.15
230 TestMultiNode/serial/StopNode 2.31
231 TestMultiNode/serial/StartAfterStop 39.28
233 TestMultiNode/serial/DeleteNode 2.31
235 TestMultiNode/serial/RestartMultiNode 174.49
236 TestMultiNode/serial/ValidateNameConflict 49.77
243 TestScheduledStopUnix 115.85
247 TestRunningBinaryUpgrade 220.7
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 96.06
261 TestNetworkPlugins/group/false 3
265 TestStoppedBinaryUpgrade/Setup 0.58
266 TestStoppedBinaryUpgrade/Upgrade 154.92
267 TestNoKubernetes/serial/StartWithStopK8s 62.51
268 TestNoKubernetes/serial/Start 29.97
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
270 TestNoKubernetes/serial/ProfileList 29.24
271 TestNoKubernetes/serial/Stop 1.3
272 TestNoKubernetes/serial/StartNoArgs 21.33
273 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
283 TestPause/serial/Start 66.08
284 TestNetworkPlugins/group/auto/Start 97.19
285 TestPause/serial/SecondStartNoReconfiguration 51.74
286 TestPause/serial/Pause 0.72
287 TestPause/serial/VerifyStatus 0.25
288 TestPause/serial/Unpause 0.63
289 TestPause/serial/PauseAgain 0.8
290 TestPause/serial/DeletePaused 0.81
291 TestPause/serial/VerifyDeletedResources 0.65
292 TestNetworkPlugins/group/kindnet/Start 64.53
293 TestNetworkPlugins/group/auto/KubeletFlags 0.21
294 TestNetworkPlugins/group/auto/NetCatPod 11.22
295 TestNetworkPlugins/group/auto/DNS 0.17
296 TestNetworkPlugins/group/auto/Localhost 0.14
297 TestNetworkPlugins/group/auto/HairPin 0.14
298 TestNetworkPlugins/group/calico/Start 78.48
299 TestNetworkPlugins/group/custom-flannel/Start 85.7
300 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
301 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
302 TestNetworkPlugins/group/kindnet/NetCatPod 10.43
303 TestNetworkPlugins/group/kindnet/DNS 0.22
304 TestNetworkPlugins/group/kindnet/Localhost 0.21
305 TestNetworkPlugins/group/kindnet/HairPin 0.17
306 TestNetworkPlugins/group/enable-default-cni/Start 58.17
307 TestNetworkPlugins/group/calico/ControllerPod 6.01
308 TestNetworkPlugins/group/calico/KubeletFlags 0.21
309 TestNetworkPlugins/group/calico/NetCatPod 12.28
310 TestNetworkPlugins/group/flannel/Start 78.01
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
313 TestNetworkPlugins/group/calico/DNS 0.18
314 TestNetworkPlugins/group/calico/Localhost 0.13
315 TestNetworkPlugins/group/calico/HairPin 0.14
316 TestNetworkPlugins/group/custom-flannel/DNS 0.21
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
318 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
319 TestNetworkPlugins/group/bridge/Start 64.5
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.22
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
324 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
325 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
326 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
328 TestStartStop/group/no-preload/serial/FirstStart 83.42
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
330 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
331 TestNetworkPlugins/group/flannel/NetCatPod 12.27
332 TestNetworkPlugins/group/bridge/KubeletFlags 0.25
333 TestNetworkPlugins/group/bridge/NetCatPod 13.34
334 TestNetworkPlugins/group/flannel/DNS 0.17
335 TestNetworkPlugins/group/flannel/Localhost 0.14
336 TestNetworkPlugins/group/flannel/HairPin 0.14
337 TestNetworkPlugins/group/bridge/DNS 16.78
339 TestStartStop/group/embed-certs/serial/FirstStart 93.28
340 TestNetworkPlugins/group/bridge/Localhost 0.13
341 TestNetworkPlugins/group/bridge/HairPin 0.12
343 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 99.64
344 TestStartStop/group/no-preload/serial/DeployApp 8.32
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
347 TestStartStop/group/embed-certs/serial/DeployApp 10.29
348 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.02
350 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.27
351 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1
354 TestStartStop/group/no-preload/serial/SecondStart 686.01
358 TestStartStop/group/embed-certs/serial/SecondStart 561.92
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 517.74
361 TestStartStop/group/old-k8s-version/serial/Stop 4.3
362 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
373 TestStartStop/group/newest-cni/serial/FirstStart 47.09
374 TestStartStop/group/newest-cni/serial/DeployApp 0
375 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.04
376 TestStartStop/group/newest-cni/serial/Stop 7.32
377 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
378 TestStartStop/group/newest-cni/serial/SecondStart 36.24
379 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
381 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
382 TestStartStop/group/newest-cni/serial/Pause 2.37
x
+
TestDownloadOnly/v1.20.0/json-events (8.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-520840 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-520840 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.18409836s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.18s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1014 13:38:43.939074   15023 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1014 13:38:43.939173   15023 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-520840
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-520840: exit status 85 (60.989791ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-520840 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |          |
	|         | -p download-only-520840        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:38:35.794529   15035 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:35.794791   15035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:35.794801   15035 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:35.794804   15035 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:35.795059   15035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	W1014 13:38:35.795249   15035 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19790-7836/.minikube/config/config.json: open /home/jenkins/minikube-integration/19790-7836/.minikube/config/config.json: no such file or directory
	I1014 13:38:35.795972   15035 out.go:352] Setting JSON to true
	I1014 13:38:35.797225   15035 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1266,"bootTime":1728911850,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:38:35.797326   15035 start.go:139] virtualization: kvm guest
	I1014 13:38:35.799883   15035 out.go:97] [download-only-520840] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1014 13:38:35.799970   15035 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 13:38:35.800003   15035 notify.go:220] Checking for updates...
	I1014 13:38:35.801550   15035 out.go:169] MINIKUBE_LOCATION=19790
	I1014 13:38:35.803053   15035 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:35.804361   15035 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:38:35.805574   15035 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:35.806821   15035 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1014 13:38:35.809290   15035 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 13:38:35.809482   15035 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:35.909213   15035 out.go:97] Using the kvm2 driver based on user configuration
	I1014 13:38:35.909241   15035 start.go:297] selected driver: kvm2
	I1014 13:38:35.909246   15035 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:38:35.909588   15035 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:35.909711   15035 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:38:35.924809   15035 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:38:35.924855   15035 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:35.925368   15035 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1014 13:38:35.925531   15035 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 13:38:35.925561   15035 cni.go:84] Creating CNI manager for ""
	I1014 13:38:35.925607   15035 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:38:35.925612   15035 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:35.925657   15035 start.go:340] cluster config:
	{Name:download-only-520840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-520840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:35.925820   15035 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:35.927899   15035 out.go:97] Downloading VM boot image ...
	I1014 13:38:35.927961   15035 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso
	I1014 13:38:38.691360   15035 out.go:97] Starting "download-only-520840" primary control-plane node in "download-only-520840" cluster
	I1014 13:38:38.691382   15035 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 13:38:38.736088   15035 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1014 13:38:38.736128   15035 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:38.736315   15035 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1014 13:38:38.738457   15035 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1014 13:38:38.738479   15035 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1014 13:38:38.763272   15035 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-520840 host does not exist
	  To start a cluster, run: "minikube start -p download-only-520840"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
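The v1.20.0 download-only logs above record where the preload tarball and VM boot image are cached, along with the md5 checksum used to validate the tarball. A short shell sketch for checking those cached artifacts by hand, using only the paths and checksum printed in the log (a manual verification sketch, not something the test itself runs):

	# Preload tarball cached for v1.20.0 with cri-o (path and expected md5 taken from the log)
	PRELOAD=/home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	md5sum "$PRELOAD"   # expected: f93b07cde9c3289306cbaeb7a1803c19

	# VM boot image downloaded in the same run
	ls -lh /home/jenkins/minikube-integration/19790-7836/.minikube/cache/iso/amd64/minikube-v1.34.0-1728382514-19774-amd64.iso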

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-520840
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (5.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-882366 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-882366 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.884770285s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.88s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1014 13:38:50.143950   15023 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1014 13:38:50.143990   15023 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-882366
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-882366: exit status 85 (58.73207ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-520840 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-520840        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-520840        | download-only-520840 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| start   | -o=json --download-only        | download-only-882366 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-882366        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:38:44.299662   15240 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:44.299762   15240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:44.299769   15240 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:44.299773   15240 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:44.299946   15240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:38:44.300487   15240 out.go:352] Setting JSON to true
	I1014 13:38:44.301302   15240 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":1274,"bootTime":1728911850,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:38:44.301400   15240 start.go:139] virtualization: kvm guest
	I1014 13:38:44.303638   15240 out.go:97] [download-only-882366] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:38:44.303797   15240 notify.go:220] Checking for updates...
	I1014 13:38:44.305191   15240 out.go:169] MINIKUBE_LOCATION=19790
	I1014 13:38:44.306541   15240 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:44.307802   15240 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:38:44.309006   15240 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:38:44.310224   15240 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1014 13:38:44.312409   15240 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 13:38:44.312603   15240 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:44.344956   15240 out.go:97] Using the kvm2 driver based on user configuration
	I1014 13:38:44.344982   15240 start.go:297] selected driver: kvm2
	I1014 13:38:44.344988   15240 start.go:901] validating driver "kvm2" against <nil>
	I1014 13:38:44.345342   15240 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:44.345414   15240 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19790-7836/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1014 13:38:44.360380   15240 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1014 13:38:44.360427   15240 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:44.360901   15240 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1014 13:38:44.361032   15240 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 13:38:44.361057   15240 cni.go:84] Creating CNI manager for ""
	I1014 13:38:44.361101   15240 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1014 13:38:44.361111   15240 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:44.361156   15240 start.go:340] cluster config:
	{Name:download-only-882366 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-882366 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:44.361251   15240 iso.go:125] acquiring lock: {Name:mk2e2e780b05ead4007a93e6b56d28c6081926bb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:38:44.363096   15240 out.go:97] Starting "download-only-882366" primary control-plane node in "download-only-882366" cluster
	I1014 13:38:44.363120   15240 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:38:44.387538   15240 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:38:44.387604   15240 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:44.387807   15240 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1014 13:38:44.389707   15240 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1014 13:38:44.389731   15240 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1014 13:38:44.413418   15240 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1014 13:38:48.798246   15240 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I1014 13:38:48.798336   15240 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19790-7836/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-882366 host does not exist
	  To start a cluster, run: "minikube start -p download-only-882366"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-882366
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1014 13:38:50.702527   15023 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-011047 --alsologtostderr --binary-mirror http://127.0.0.1:35043 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-011047" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-011047
--- PASS: TestBinaryMirror (0.59s)
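The --binary-mirror run above fetches kubectl and friends from a local HTTP endpoint instead of dl.k8s.io; the test serves that endpoint itself on 127.0.0.1:35043. A rough stand-in for trying the flag by hand, where python3's http.server and the binary-mirror-demo profile name are illustrative assumptions (the served directory would have to mirror the dl.k8s.io release layout):

  python3 -m http.server 35043 --bind 127.0.0.1 &        # hypothetical local mirror
  out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
    --binary-mirror http://127.0.0.1:35043 --driver=kvm2 --container-runtime=crio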

                                                
                                    
x
+
TestOffline (56.96s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-190817 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-190817 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (55.908174136s)
helpers_test.go:175: Cleaning up "offline-crio-190817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-190817
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-190817: (1.054379254s)
--- PASS: TestOffline (56.96s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-313496
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-313496: exit status 85 (52.950802ms)

                                                
                                                
-- stdout --
	* Profile "addons-313496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-313496"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-313496
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-313496: exit status 85 (49.157126ms)

                                                
                                                
-- stdout --
	* Profile "addons-313496" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-313496"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (134.43s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-313496 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-313496 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m14.4306153s)
--- PASS: TestAddons/Setup (134.43s)
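With a start command that long it is easy to lose track of what actually came up; a quick sanity check against the same profile, using standard minikube and kubectl subcommands:

  out/minikube-linux-amd64 -p addons-313496 addons list    # per-addon enabled/disabled table
  kubectl --context addons-313496 get pods -A              # addon pods should be Running or Completed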

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-313496 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-313496 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/PullSecret (9.48s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-313496 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-313496 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d0455ab0-9aab-459a-953a-f53376cb4884] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d0455ab0-9aab-459a-953a-f53376cb4884] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 9.004497762s
addons_test.go:633: (dbg) Run:  kubectl --context addons-313496 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-313496 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-313496 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (9.48s)
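The PullSecret assertions reduce to checking that the gcp-auth webhook injected credentials into the busybox pod; the same probes can be run by hand with the pod, service account, and profile names from this log:

  kubectl --context addons-313496 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
  kubectl --context addons-313496 exec busybox -- printenv GOOGLE_CLOUD_PROJECT
  kubectl --context addons-313496 describe sa gcp-auth-test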

                                                
                                    
x
+
TestAddons/parallel/Registry (18.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.498644ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-kxfcz" [a4d53217-34bc-44bb-8e30-d6b8914b6825] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.00314427s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-xsptb" [ed9b7051-496c-4b26-be7b-c8c2afd04b8e] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006004026s
addons_test.go:331: (dbg) Run:  kubectl --context addons-313496 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-313496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-313496 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.778702047s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 ip
2024/10/14 13:41:48 [DEBUG] GET http://192.168.39.177:5000
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.71s)
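The registry check is an in-cluster HTTP reachability probe plus a hit on the node IP reported by the test; roughly, with curl standing in for the test's plain HTTP GET against 192.168.39.177:

  kubectl --context addons-313496 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  curl -sI http://192.168.39.177:5000/                     # registry-proxy published on the node IP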

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zpk6q" [b708cf45-5bb9-492a-9522-8882565da739] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004227581s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable inspektor-gadget --alsologtostderr -v=1: (5.756975571s)
--- PASS: TestAddons/parallel/InspektorGadget (10.76s)

                                                
                                    
x
+
TestAddons/parallel/CSI (52.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1014 13:41:48.564784   15023 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1014 13:41:48.569472   15023 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1014 13:41:48.569491   15023 kapi.go:107] duration metric: took 4.722245ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 4.730374ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-313496 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-313496 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [db9d77df-e43b-4fbb-9591-78de70b7183d] Pending
helpers_test.go:344: "task-pv-pod" [db9d77df-e43b-4fbb-9591-78de70b7183d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [db9d77df-e43b-4fbb-9591-78de70b7183d] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003841457s
addons_test.go:511: (dbg) Run:  kubectl --context addons-313496 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-313496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-313496 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-313496 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-313496 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-313496 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-313496 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [21c8fe85-b2e2-43dc-b232-90ece4febe2e] Pending
helpers_test.go:344: "task-pv-pod-restore" [21c8fe85-b2e2-43dc-b232-90ece4febe2e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [21c8fe85-b2e2-43dc-b232-90ece4febe2e] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004196637s
addons_test.go:553: (dbg) Run:  kubectl --context addons-313496 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-313496 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-313496 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable volumesnapshots --alsologtostderr -v=1: (1.070004354s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.856059352s)
--- PASS: TestAddons/parallel/CSI (52.69s)
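The long run of get pvc ... jsonpath lines above is just the test helper polling the claim phase at a fixed interval; a minimal sketch of the same loop for the hpvc claim (the 2-second sleep is an arbitrary choice here, not the helper's actual interval):

  until [ "$(kubectl --context addons-313496 get pvc hpvc -n default -o jsonpath='{.status.phase}')" = "Bound" ]; do
    sleep 2    # claim stays Pending until the consuming pod is scheduled
  done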

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.07s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-313496 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-5fv7p" [8d8dc810-f30f-448c-aca9-08dbd1581ad0] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-5fv7p" [8d8dc810-f30f-448c-aca9-08dbd1581ad0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-5fv7p" [8d8dc810-f30f-448c-aca9-08dbd1581ad0] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.01452188s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable headlamp --alsologtostderr -v=1: (6.161304756s)
--- PASS: TestAddons/parallel/Headlamp (18.07s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-gc64q" [9d842589-2e2c-4f40-9521-09982053dfef] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003738075s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (52.12s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-313496 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-313496 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-313496 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [ea45518e-bed9-4277-8a25-3db38d803de3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [ea45518e-bed9-4277-8a25-3db38d803de3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [ea45518e-bed9-4277-8a25-3db38d803de3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003941947s
addons_test.go:902: (dbg) Run:  kubectl --context addons-313496 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 ssh "cat /opt/local-path-provisioner/pvc-c19f89aa-af99-4f45-994e-6760df4750a7_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-313496 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-313496 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.283239756s)
--- PASS: TestAddons/parallel/LocalPath (52.12s)
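The final assertion reads back the file the test-local-path pod wrote into the provisioned hostPath volume; by hand that is a single ssh into the node, with the pvc UID in the path being specific to this run:

  out/minikube-linux-amd64 -p addons-313496 ssh \
    "cat /opt/local-path-provisioner/pvc-c19f89aa-af99-4f45-994e-6760df4750a7_default_test-pvc/file1"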

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.6s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-kkmfm" [846014ef-c2c5-47a1-b0ae-3e582a248ee6] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003539214s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.60s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-r297f" [6ad73659-1885-4f37-8a9b-ef8c3127fb92] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003375738s
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-amd64 -p addons-313496 addons disable yakd --alsologtostderr -v=1: (5.860156869s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

                                                
                                    
x
+
TestCertOptions (98.02s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-914285 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-914285 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m36.765943045s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-914285 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-914285 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-914285 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-914285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-914285
--- PASS: TestCertOptions (98.02s)
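The interesting part of TestCertOptions is that the extra --apiserver-ips and --apiserver-names values must appear as SANs in the generated apiserver certificate; the openssl call is the one from the test, and the trailing grep is only added here to narrow the output:

  out/minikube-linux-amd64 -p cert-options-914285 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 'Subject Alternative Name'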

                                                
                                    
x
+
TestCertExpiration (319.55s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-750530 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-750530 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (57.012584699s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-750530 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-750530 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m21.670185109s)
helpers_test.go:175: Cleaning up "cert-expiration-750530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-750530
--- PASS: TestCertExpiration (319.55s)

                                                
                                    
x
+
TestForceSystemdFlag (81.6s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-273294 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1014 14:46:06.400604   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-273294 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m20.381314643s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-273294 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-273294" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-273294
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-273294: (1.020244518s)
--- PASS: TestForceSystemdFlag (81.60s)
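--force-systemd is expected to surface as the systemd cgroup manager in CRI-O's drop-in config; the cat below is the command from the test, while grepping for cgroup_manager is an assumption about how that drop-in is laid out:

  out/minikube-linux-amd64 -p force-systemd-flag-273294 ssh \
    "cat /etc/crio/crio.conf.d/02-crio.conf" | grep -i cgroup_manager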

                                                
                                    
x
+
TestForceSystemdEnv (72.53s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-338682 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-338682 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m11.533325414s)
helpers_test.go:175: Cleaning up "force-systemd-env-338682" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-338682
--- PASS: TestForceSystemdEnv (72.53s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.23s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1014 14:42:19.573543   15023 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1014 14:42:19.573708   15023 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1014 14:42:19.601063   15023 install.go:62] docker-machine-driver-kvm2: exit status 1
W1014 14:42:19.601377   15023 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1014 14:42:19.601448   15023 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3764489731/001/docker-machine-driver-kvm2
I1014 14:42:19.760158   15023 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3764489731/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0] Decompressors:map[bz2:0xc000014f80 gz:0xc000014f88 tar:0xc000014ee0 tar.bz2:0xc000014ef0 tar.gz:0xc000014f10 tar.xz:0xc000014f40 tar.zst:0xc000014f70 tbz2:0xc000014ef0 tgz:0xc000014f10 txz:0xc000014f40 tzst:0xc000014f70 xz:0xc000014f90 zip:0xc000014fd0 zst:0xc000014f98] Getters:map[file:0xc001d64f00 http:0xc00083a500 https:0xc00083a550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1014 14:42:19.760197   15023 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3764489731/001/docker-machine-driver-kvm2
I1014 14:42:20.354812   15023 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1014 14:42:20.354909   15023 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1014 14:42:20.382944   15023 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1014 14:42:20.382985   15023 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1014 14:42:20.383046   15023 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1014 14:42:20.383077   15023 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3764489731/002/docker-machine-driver-kvm2
I1014 14:42:20.406837   15023 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3764489731/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0 0x52fcce0] Decompressors:map[bz2:0xc000014f80 gz:0xc000014f88 tar:0xc000014ee0 tar.bz2:0xc000014ef0 tar.gz:0xc000014f10 tar.xz:0xc000014f40 tar.zst:0xc000014f70 tbz2:0xc000014ef0 tgz:0xc000014f10 txz:0xc000014f40 tzst:0xc000014f70 xz:0xc000014f90 zip:0xc000014fd0 zst:0xc000014f98] Getters:map[file:0xc0016f81d0 http:0xc001db4280 https:0xc001db42d0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1014 14:42:20.406892   15023 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3764489731/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (1.23s)
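The install/update flow validates whichever docker-machine-driver-kvm2 is first on PATH and re-downloads it when the reported version is too old (1.1.1 vs the wanted 1.3.0 above). Reproducing the probe by hand looks roughly like this, assuming the driver binary answers a plain version argument the way minikube's validator expects:

  export PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:$PATH
  docker-machine-driver-kvm2 version    # an old version here is what triggers the re-download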

                                                
                                    
x
+
TestErrorSpam/setup (44.51s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-425739 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-425739 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-425739 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-425739 --driver=kvm2  --container-runtime=crio: (44.509956533s)
--- PASS: TestErrorSpam/setup (44.51s)

                                                
                                    
x
+
TestErrorSpam/start (0.35s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

                                                
                                    
x
+
TestErrorSpam/status (0.74s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 status
--- PASS: TestErrorSpam/status (0.74s)

                                                
                                    
x
+
TestErrorSpam/pause (1.54s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 pause
--- PASS: TestErrorSpam/pause (1.54s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (6.12s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 stop: (2.313453844s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 stop: (1.792043282s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-425739 --log_dir /tmp/nospam-425739 stop: (2.013289487s)
--- PASS: TestErrorSpam/stop (6.12s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19790-7836/.minikube/files/etc/test/nested/copy/15023/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (86.39s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-917108 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1014 13:51:06.400769   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:06.407207   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:06.418653   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:06.440080   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:06.481533   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:06.562978   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:06.724470   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:07.046146   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:07.687834   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:08.969287   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:11.532073   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:16.653883   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:26.896038   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:47.377794   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-917108 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m26.388856005s)
--- PASS: TestFunctional/serial/StartWithProxy (86.39s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (55.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1014 13:51:51.658387   15023 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-917108 --alsologtostderr -v=8
E1014 13:52:28.339692   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-917108 --alsologtostderr -v=8: (55.348491071s)
functional_test.go:663: soft start took 55.349104083s for "functional-917108" cluster.
I1014 13:52:47.007189   15023 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (55.35s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-917108 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 cache add registry.k8s.io/pause:3.1: (1.224786135s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 cache add registry.k8s.io/pause:3.3: (1.220580504s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 cache add registry.k8s.io/pause:latest: (1.128844886s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.57s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-917108 /tmp/TestFunctionalserialCacheCmdcacheadd_local3697390234/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cache add minikube-local-cache-test:functional-917108
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cache delete minikube-local-cache-test:functional-917108
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-917108
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (208.182022ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 cache reload: (1.026543085s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
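
The cache_reload sequence above removes a cached image from the node, confirms that crictl inspecti now fails, runs minikube cache reload, and confirms the image is present again. Below is a minimal standalone sketch of that same flow; it is not the functional_test.go implementation, and the profile name and binary path are simply taken from this run.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run invokes the minikube binary used in this report and echoes its combined output.
	func run(args ...string) error {
		out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
		fmt.Printf("%s", out)
		return err
	}

	func main() {
		const profile = "functional-917108"
		// Remove the cached image from the node's runtime.
		run("-p", profile, "ssh", "sudo crictl rmi registry.k8s.io/pause:latest")
		// inspecti should now fail with a non-zero exit status.
		if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("expected inspecti to fail after rmi")
		}
		// Reload the cache; afterwards the image should be present again.
		run("-p", profile, "cache", "reload")
		if err := run("-p", profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("image still missing after cache reload:", err)
		}
	}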

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 kubectl -- --context functional-917108 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-917108 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (35.37s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-917108 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-917108 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.373142899s)
functional_test.go:761: restart took 35.373283377s for "functional-917108" cluster.
I1014 13:53:29.534720   15023 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (35.37s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-917108 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
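
The ComponentHealth check lists the control-plane pods as JSON and asserts each is Running and Ready, which is where the "phase"/"status" lines above come from. A rough standard-library equivalent of that parsing is sketched below; it assumes the kubeadm "component" label on the pods and is not the test's own code.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-917108",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}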

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 logs: (1.398250311s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 logs --file /tmp/TestFunctionalserialLogsFileCmd730738420/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 logs --file /tmp/TestFunctionalserialLogsFileCmd730738420/001/logs.txt: (1.421010196s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.37s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-917108 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-917108
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-917108: exit status 115 (274.950869ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.149:30673 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-917108 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)
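
The InvalidService case applies a Service whose selector matches no running pod and then expects minikube service to exit with status 115 (SVC_UNREACHABLE) instead of printing a usable URL. The sketch below reproduces that apply/expect-failure/cleanup sequence and reads the exit code; the manifest path comes from the test's testdata and its contents are not reproduced here.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		ctx := []string{"--context", "functional-917108"}
		// Create the intentionally broken service.
		_ = exec.Command("kubectl", append(ctx, "apply", "-f", "testdata/invalidsvc.yaml")...).Run()

		// minikube service should refuse to hand out a URL for it.
		err := exec.Command("out/minikube-linux-amd64",
			"service", "invalid-svc", "-p", "functional-917108").Run()
		if ee, ok := err.(*exec.ExitError); ok {
			fmt.Println("exit status:", ee.ExitCode()) // expected: 115 (SVC_UNREACHABLE)
		} else {
			fmt.Println("expected a non-zero exit, got:", err)
		}

		// Clean up.
		_ = exec.Command("kubectl", append(ctx, "delete", "-f", "testdata/invalidsvc.yaml")...).Run()
	}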

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 config get cpus: exit status 14 (62.572234ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 config get cpus: exit status 14 (53.480502ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
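
The ConfigCmd cycle relies on config get returning exit status 14 when the key is unset, 0 after config set, and 14 again after config unset. A small sketch of driving that cycle and reading the exit codes follows; the profile name and binary path are taken from this run, and the helper name is illustrative.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// exitCode runs the minikube binary and returns the process exit status.
	func exitCode(args ...string) int {
		if err := exec.Command("out/minikube-linux-amd64", args...).Run(); err != nil {
			if ee, ok := err.(*exec.ExitError); ok {
				return ee.ExitCode()
			}
			return -1
		}
		return 0
	}

	func main() {
		p := []string{"-p", "functional-917108", "config"}
		fmt.Println("unset:", exitCode(append(p, "unset", "cpus")...))
		fmt.Println("get missing key:", exitCode(append(p, "get", "cpus")...)) // expected: 14
		fmt.Println("set:", exitCode(append(p, "set", "cpus", "2")...))
		fmt.Println("get:", exitCode(append(p, "get", "cpus")...)) // expected: 0
		fmt.Println("unset:", exitCode(append(p, "unset", "cpus")...))
		fmt.Println("get missing key:", exitCode(append(p, "get", "cpus")...)) // expected: 14
	}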

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (12.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-917108 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-917108 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 24723: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.47s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-917108 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-917108 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (142.129673ms)

                                                
                                                
-- stdout --
	* [functional-917108] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 13:54:01.775627   24416 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:54:01.775723   24416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:01.775727   24416 out.go:358] Setting ErrFile to fd 2...
	I1014 13:54:01.775731   24416 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:01.775905   24416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:54:01.776432   24416 out.go:352] Setting JSON to false
	I1014 13:54:01.777307   24416 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2192,"bootTime":1728911850,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:54:01.777406   24416 start.go:139] virtualization: kvm guest
	I1014 13:54:01.779500   24416 out.go:177] * [functional-917108] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 13:54:01.780928   24416 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:54:01.780932   24416 notify.go:220] Checking for updates...
	I1014 13:54:01.782460   24416 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:54:01.783795   24416 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:54:01.784977   24416 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:01.786116   24416 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:54:01.787672   24416 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:54:01.789662   24416 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:54:01.790261   24416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:01.790350   24416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:01.805415   24416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39395
	I1014 13:54:01.805857   24416 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:01.806393   24416 main.go:141] libmachine: Using API Version  1
	I1014 13:54:01.806427   24416 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:01.806822   24416 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:01.807031   24416 main.go:141] libmachine: (functional-917108) Calling .DriverName
	I1014 13:54:01.807332   24416 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:54:01.807762   24416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:01.807811   24416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:01.823865   24416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38353
	I1014 13:54:01.824334   24416 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:01.824817   24416 main.go:141] libmachine: Using API Version  1
	I1014 13:54:01.824836   24416 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:01.825129   24416 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:01.825305   24416 main.go:141] libmachine: (functional-917108) Calling .DriverName
	I1014 13:54:01.857570   24416 out.go:177] * Using the kvm2 driver based on existing profile
	I1014 13:54:01.858857   24416 start.go:297] selected driver: kvm2
	I1014 13:54:01.858872   24416 start.go:901] validating driver "kvm2" against &{Name:functional-917108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-917108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:01.858966   24416 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:54:01.861052   24416 out.go:201] 
	W1014 13:54:01.862174   24416 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 13:54:01.863276   24416 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-917108 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
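
The failing half of DryRun is minikube's pre-flight check rejecting --memory 250MB because it is below the 1800MB usable minimum, surfaced as RSRC_INSUFFICIENT_REQ_MEMORY with exit status 23. The toy check below only illustrates that kind of validation; the threshold is taken from the error message above and this is not minikube's actual validation code.

	package main

	import (
		"errors"
		"fmt"
	)

	const minUsableMemoryMB = 1800 // from the RSRC_INSUFFICIENT_REQ_MEMORY message above

	var errInsufficientReqMemory = errors.New("RSRC_INSUFFICIENT_REQ_MEMORY")

	func validateRequestedMemory(requestedMB int) error {
		if requestedMB < minUsableMemoryMB {
			return fmt.Errorf("%w: requested %dMiB is less than the usable minimum of %dMB",
				errInsufficientReqMemory, requestedMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		for _, req := range []int{250, 4000} {
			if err := validateRequestedMemory(req); err != nil {
				fmt.Println("X Exiting due to", err)
				continue
			}
			fmt.Printf("%dMB accepted\n", req)
		}
	}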

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-917108 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-917108 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (134.145832ms)

                                                
                                                
-- stdout --
	* [functional-917108] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 13:54:01.634505   24388 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:54:01.634754   24388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:01.634764   24388 out.go:358] Setting ErrFile to fd 2...
	I1014 13:54:01.634769   24388 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:01.635024   24388 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 13:54:01.635531   24388 out.go:352] Setting JSON to false
	I1014 13:54:01.636381   24388 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":2192,"bootTime":1728911850,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 13:54:01.636476   24388 start.go:139] virtualization: kvm guest
	I1014 13:54:01.638784   24388 out.go:177] * [functional-917108] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1014 13:54:01.640706   24388 notify.go:220] Checking for updates...
	I1014 13:54:01.640727   24388 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:54:01.642197   24388 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:54:01.643472   24388 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 13:54:01.644763   24388 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 13:54:01.645923   24388 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 13:54:01.647146   24388 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:54:01.648791   24388 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 13:54:01.649224   24388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:01.649272   24388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:01.664166   24388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39459
	I1014 13:54:01.664675   24388 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:01.665248   24388 main.go:141] libmachine: Using API Version  1
	I1014 13:54:01.665271   24388 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:01.665583   24388 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:01.665758   24388 main.go:141] libmachine: (functional-917108) Calling .DriverName
	I1014 13:54:01.665960   24388 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:54:01.666287   24388 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 13:54:01.666340   24388 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 13:54:01.681547   24388 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46023
	I1014 13:54:01.682067   24388 main.go:141] libmachine: () Calling .GetVersion
	I1014 13:54:01.682622   24388 main.go:141] libmachine: Using API Version  1
	I1014 13:54:01.682678   24388 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 13:54:01.683043   24388 main.go:141] libmachine: () Calling .GetMachineName
	I1014 13:54:01.683243   24388 main.go:141] libmachine: (functional-917108) Calling .DriverName
	I1014 13:54:01.714926   24388 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1014 13:54:01.716252   24388 start.go:297] selected driver: kvm2
	I1014 13:54:01.716265   24388 start.go:901] validating driver "kvm2" against &{Name:functional-917108 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.1 ClusterName:functional-917108 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.149 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:54:01.716403   24388 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:54:01.718729   24388 out.go:201] 
	W1014 13:54:01.719904   24388 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1014 13:54:01.721012   24388 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-917108 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-917108 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jtkfl" [a2553111-66f0-4098-8754-88f087002b75] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-jtkfl" [a2553111-66f0-4098-8754-88f087002b75] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006491722s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.149:31893
functional_test.go:1675: http://192.168.39.149:31893: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-jtkfl

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.149:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.149:31893
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.73s)
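
ServiceCmdConnect deploys an echoserver, exposes it as a NodePort service, asks minikube service --url for the endpoint, and then issues a plain HTTP GET, whose response body is shown above. The sketch below covers the URL-then-probe half of that flow; service and profile names are taken from this run.

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask minikube for the NodePort URL of the service created above.
		out, err := exec.Command("out/minikube-linux-amd64",
			"-p", "functional-917108", "service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out))
		fmt.Println("endpoint:", url) // e.g. http://192.168.39.149:31893

		// Probe it the same way the test does: a plain GET, body printed on success.
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("status %d\n%s", resp.StatusCode, body)
	}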

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (40.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [28df383c-8148-4640-9ff1-62b11f4b6abc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003825303s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-917108 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-917108 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-917108 get pvc myclaim -o=json
I1014 13:53:44.675739   15023 retry.go:31] will retry after 2.954572742s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:9575752e-a791-4d86-b536-207e4d9e9d06 ResourceVersion:787 Generation:0 CreationTimestamp:2024-10-14 13:53:44 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00093e730 VolumeMode:0xc00093e770 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-917108 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-917108 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [fb554d95-e8db-4826-8f0d-38bc46ddf63b] Pending
helpers_test.go:344: "sp-pod" [fb554d95-e8db-4826-8f0d-38bc46ddf63b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [fb554d95-e8db-4826-8f0d-38bc46ddf63b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.005338438s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-917108 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-917108 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-917108 delete -f testdata/storage-provisioner/pod.yaml: (2.0739s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-917108 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c135b791-ea10-4b10-9d72-639175bf68c8] Pending
helpers_test.go:344: "sp-pod" [c135b791-ea10-4b10-9d72-639175bf68c8] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c135b791-ea10-4b10-9d72-639175bf68c8] Running
2024/10/14 13:54:14 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00345959s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-917108 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.23s)
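
The PersistentVolumeClaim test verifies that data written into the mounted volume survives deleting and recreating the consuming pod: it touches /tmp/mount/foo in the first sp-pod, deletes the pod, re-applies the same manifest, and lists /tmp/mount in the replacement pod. A sketch of that write/recreate/read sequence via kubectl follows; the pvc.yaml and pod.yaml referenced above belong to the test's testdata and are not reproduced here.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) string {
		base := []string{"--context", "functional-917108"}
		out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl %v failed: %v\n", args, err)
		}
		return string(out)
	}

	func main() {
		// Write a marker file into the PVC-backed mount in the first pod.
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		// Recreate the pod from the same manifest; the PVC (and its data) stays behind.
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// In a real run you would wait for the new sp-pod to be Running before this step.
		fmt.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
	}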

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh -n functional-917108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cp functional-917108:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd397528247/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh -n functional-917108 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh -n functional-917108 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (26.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-917108 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-7zgvf" [7c648131-e2ab-4aa7-8b2e-822791ef7d16] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-7zgvf" [7c648131-e2ab-4aa7-8b2e-822791ef7d16] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.26195435s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-917108 exec mysql-6cdb49bbb-7zgvf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-917108 exec mysql-6cdb49bbb-7zgvf -- mysql -ppassword -e "show databases;": exit status 1 (353.141899ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1014 13:53:59.227981   15023 retry.go:31] will retry after 782.099351ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-917108 exec mysql-6cdb49bbb-7zgvf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-917108 exec mysql-6cdb49bbb-7zgvf -- mysql -ppassword -e "show databases;": exit status 1 (339.490057ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1014 13:54:00.350292   15023 retry.go:31] will retry after 1.156477862s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-917108 exec mysql-6cdb49bbb-7zgvf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-917108 exec mysql-6cdb49bbb-7zgvf -- mysql -ppassword -e "show databases;": exit status 1 (236.250212ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1014 13:54:01.743469   15023 retry.go:31] will retry after 2.464464896s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-917108 exec mysql-6cdb49bbb-7zgvf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.94s)
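
The MySQL check tolerates the server's startup window: the first "show databases;" attempts fail with ERROR 1045 or ERROR 2002 and are retried with a growing delay until one succeeds. A minimal retry loop in the same spirit is sketched below; the fixed attempt count and doubling backoff are illustrative choices rather than the test's exact policy, and the pod name is specific to this run.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 10; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-917108",
				"exec", "mysql-6cdb49bbb-7zgvf", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("mysql ready after %d attempt(s):\n%s", attempt, out)
				return
			}
			// Typical transient failures while mysqld is still starting:
			// ERROR 1045 (access denied before init finishes) or ERROR 2002 (socket not up yet).
			fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2
		}
		fmt.Println("mysql never became ready")
	}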

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15023/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo cat /etc/test/nested/copy/15023/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15023.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo cat /etc/ssl/certs/15023.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15023.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo cat /usr/share/ca-certificates/15023.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/150232.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo cat /etc/ssl/certs/150232.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/150232.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo cat /usr/share/ca-certificates/150232.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.49s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-917108 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh "sudo systemctl is-active docker": exit status 1 (271.745326ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh "sudo systemctl is-active containerd": exit status 1 (249.839639ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
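
The test passes despite the two non-zero exits above: on a cri-o node, `systemctl is-active docker` and `systemctl is-active containerd` print "inactive" and exit non-zero (status 3 in the ssh output shown), so the check has to look at the printed state rather than the exit code. A minimal Go sketch of that style of check follows; it is illustrative only, not the actual functional_test.go helper, and the unit names are just examples.

// Sketch only: ignore the non-zero exit that `systemctl is-active`
// returns for inactive units and decide based on the printed state.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// unitState returns whatever `systemctl is-active <unit>` printed;
// inactive units make the command exit non-zero, so that error is ignored.
func unitState(unit string) string {
	out, _ := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	return strings.TrimSpace(string(out))
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: %s\n", unit, unitState(unit))
	}
}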

                                                
                                    
TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (12.2s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-917108 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-917108 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-mst94" [db73cbdd-7cd7-4867-bc94-33c6be60baa1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-mst94" [db73cbdd-7cd7-4867-bc94-33c6be60baa1] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.003825567s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "297.445555ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.916202ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "325.921699ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "45.711571ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.19s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdany-port1148477327/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728914021216450138" to /tmp/TestFunctionalparallelMountCmdany-port1148477327/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728914021216450138" to /tmp/TestFunctionalparallelMountCmdany-port1148477327/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728914021216450138" to /tmp/TestFunctionalparallelMountCmdany-port1148477327/001/test-1728914021216450138
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (220.310293ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1014 13:53:41.437090   15023 retry.go:31] will retry after 734.997701ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 14 13:53 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 14 13:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 14 13:53 test-1728914021216450138
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh cat /mount-9p/test-1728914021216450138
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-917108 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c70c9d16-c583-415b-bd76-0ea76e0fe829] Pending
helpers_test.go:344: "busybox-mount" [c70c9d16-c583-415b-bd76-0ea76e0fe829] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c70c9d16-c583-415b-bd76-0ea76e0fe829] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c70c9d16-c583-415b-bd76-0ea76e0fe829] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.008217382s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-917108 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdany-port1148477327/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.19s)
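
Note the retry above: the first `findmnt -T /mount-9p | grep 9p` probe runs before the 9p mount is visible and is retried after roughly 735ms (retry.go:31). A minimal Go sketch of that wait-for-mount pattern is shown below; waitForMount is an assumed helper name for illustration, not minikube's retry implementation.

// Sketch only: poll until a 9p mount is visible at dir, mirroring the
// retry seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForMount(dir string, attempts int, delay time.Duration) error {
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("findmnt", "-T", dir).CombinedOutput()
		if err == nil && strings.Contains(string(out), "9p") {
			return nil // mount is visible
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("no 9p mount at %s after %d attempts", dir, attempts)
}

func main() {
	if err := waitForMount("/mount-9p", 5, time.Second); err != nil {
		fmt.Println(err)
	}
}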

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 service list -o json
functional_test.go:1494: Took "361.217485ms" to run "out/minikube-linux-amd64 -p functional-917108 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.149:31337
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 service hello-node --url --format={{.IP}}
E1014 13:53:50.261048   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.149:31337
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.65s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-917108 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-917108
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-917108 image ls --format short --alsologtostderr:
I1014 13:54:05.384436   24905 out.go:345] Setting OutFile to fd 1 ...
I1014 13:54:05.384531   24905 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:05.384538   24905 out.go:358] Setting ErrFile to fd 2...
I1014 13:54:05.384543   24905 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:05.384708   24905 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
I1014 13:54:05.385257   24905 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:05.385354   24905 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:05.385708   24905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:05.385748   24905 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:05.400796   24905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
I1014 13:54:05.401316   24905 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:05.401859   24905 main.go:141] libmachine: Using API Version  1
I1014 13:54:05.401883   24905 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:05.402303   24905 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:05.402484   24905 main.go:141] libmachine: (functional-917108) Calling .GetState
I1014 13:54:05.404783   24905 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:05.404825   24905 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:05.419373   24905 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
I1014 13:54:05.419844   24905 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:05.420385   24905 main.go:141] libmachine: Using API Version  1
I1014 13:54:05.420425   24905 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:05.420754   24905 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:05.420925   24905 main.go:141] libmachine: (functional-917108) Calling .DriverName
I1014 13:54:05.421118   24905 ssh_runner.go:195] Run: systemctl --version
I1014 13:54:05.421141   24905 main.go:141] libmachine: (functional-917108) Calling .GetSSHHostname
I1014 13:54:05.423840   24905 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:05.424228   24905 main.go:141] libmachine: (functional-917108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bc:da", ip: ""} in network mk-functional-917108: {Iface:virbr1 ExpiryTime:2024-10-14 14:50:40 +0000 UTC Type:0 Mac:52:54:00:8b:bc:da Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:functional-917108 Clientid:01:52:54:00:8b:bc:da}
I1014 13:54:05.424259   24905 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined IP address 192.168.39.149 and MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:05.424405   24905 main.go:141] libmachine: (functional-917108) Calling .GetSSHPort
I1014 13:54:05.424619   24905 main.go:141] libmachine: (functional-917108) Calling .GetSSHKeyPath
I1014 13:54:05.424780   24905 main.go:141] libmachine: (functional-917108) Calling .GetSSHUsername
I1014 13:54:05.425032   24905 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/functional-917108/id_rsa Username:docker}
I1014 13:54:05.564797   24905 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 13:54:05.687930   24905 main.go:141] libmachine: Making call to close driver server
I1014 13:54:05.687947   24905 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:05.688205   24905 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:05.688222   24905 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 13:54:05.688237   24905 main.go:141] libmachine: Making call to close driver server
I1014 13:54:05.688245   24905 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:05.688452   24905 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:05.688467   24905 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 13:54:05.688483   24905 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-917108 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| localhost/minikube-local-cache-test     | functional-917108  | d8050b8928c57 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| localhost/my-image                      | functional-917108  | f9a812a6a356a | 1.47MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-917108 image ls --format table --alsologtostderr:
I1014 13:54:11.832269   25136 out.go:345] Setting OutFile to fd 1 ...
I1014 13:54:11.832519   25136 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:11.832529   25136 out.go:358] Setting ErrFile to fd 2...
I1014 13:54:11.832533   25136 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:11.832745   25136 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
I1014 13:54:11.833367   25136 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:11.833475   25136 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:11.833847   25136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:11.833895   25136 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:11.848957   25136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
I1014 13:54:11.849514   25136 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:11.850136   25136 main.go:141] libmachine: Using API Version  1
I1014 13:54:11.850155   25136 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:11.850515   25136 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:11.850730   25136 main.go:141] libmachine: (functional-917108) Calling .GetState
I1014 13:54:11.852352   25136 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:11.852403   25136 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:11.867471   25136 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
I1014 13:54:11.867926   25136 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:11.868565   25136 main.go:141] libmachine: Using API Version  1
I1014 13:54:11.868598   25136 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:11.868893   25136 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:11.869087   25136 main.go:141] libmachine: (functional-917108) Calling .DriverName
I1014 13:54:11.869295   25136 ssh_runner.go:195] Run: systemctl --version
I1014 13:54:11.869329   25136 main.go:141] libmachine: (functional-917108) Calling .GetSSHHostname
I1014 13:54:11.872283   25136 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:11.872688   25136 main.go:141] libmachine: (functional-917108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bc:da", ip: ""} in network mk-functional-917108: {Iface:virbr1 ExpiryTime:2024-10-14 14:50:40 +0000 UTC Type:0 Mac:52:54:00:8b:bc:da Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:functional-917108 Clientid:01:52:54:00:8b:bc:da}
I1014 13:54:11.872726   25136 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined IP address 192.168.39.149 and MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:11.872871   25136 main.go:141] libmachine: (functional-917108) Calling .GetSSHPort
I1014 13:54:11.873046   25136 main.go:141] libmachine: (functional-917108) Calling .GetSSHKeyPath
I1014 13:54:11.873223   25136 main.go:141] libmachine: (functional-917108) Calling .GetSSHUsername
I1014 13:54:11.873368   25136 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/functional-917108/id_rsa Username:docker}
I1014 13:54:11.979047   25136 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 13:54:12.027330   25136 main.go:141] libmachine: Making call to close driver server
I1014 13:54:12.027343   25136 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:12.027565   25136 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:12.027585   25136 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 13:54:12.027586   25136 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
I1014 13:54:12.027594   25136 main.go:141] libmachine: Making call to close driver server
I1014 13:54:12.027601   25136 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:12.027791   25136 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:12.027804   25136 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-917108 image ls --format json --alsologtostderr:
[{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee04
15a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisio
ner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"d8050b8928c57f3e3795f425d9bc2be7437a0fc1b88713db9c4230e9f383f01b","repoDigests":["localhost/minikube-local-cache-test@sha256:e8be6afd3c1f2202dd55f719a782e8264152e9d79e7986184caafa5547f03994"],"repoTags":["localhost/minikube-local-cache-test:functional-917108"],"size":"3330"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417
263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43
e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"f9a812a6a356a8f6fe361cebb1bfbe00e227e8347db8699f31fc7682999d3c71","repoDigests":["localhost/my-image@sha256:20995bb0bb227f5480e0db7bbd2cd4d8ba85b608fcc6b963
2ae69f8fdd6755f3"],"repoTags":["localhost/my-image:functional-917108"],"size":"1468600"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fd
f2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"1f284ae02ee88820fbc64b19fa8c460e3e23a6c38c403e9a06a4191868977a62","repoDigests":["docker.io/library/ebdb12a7c15cb91a805b33f3dd6c045ae4f13551fadb035a160cf42dba1c6fee-tmp@sha256:3d8abf9cbf0dd88b549b0669e986c55e441f45ecfbeceec5d251738ca7612fe0"],"repoTags":[],"size":"1466016"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1b
ea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-917108 image ls --format json --alsologtostderr:
I1014 13:54:11.571891   25089 out.go:345] Setting OutFile to fd 1 ...
I1014 13:54:11.572167   25089 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:11.572175   25089 out.go:358] Setting ErrFile to fd 2...
I1014 13:54:11.572180   25089 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:11.572461   25089 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
I1014 13:54:11.573058   25089 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:11.573179   25089 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:11.573575   25089 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:11.573631   25089 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:11.588882   25089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41913
I1014 13:54:11.589434   25089 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:11.590028   25089 main.go:141] libmachine: Using API Version  1
I1014 13:54:11.590055   25089 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:11.590704   25089 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:11.591519   25089 main.go:141] libmachine: (functional-917108) Calling .GetState
I1014 13:54:11.593482   25089 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:11.593528   25089 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:11.608381   25089 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35135
I1014 13:54:11.608847   25089 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:11.609332   25089 main.go:141] libmachine: Using API Version  1
I1014 13:54:11.609348   25089 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:11.609706   25089 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:11.609893   25089 main.go:141] libmachine: (functional-917108) Calling .DriverName
I1014 13:54:11.610074   25089 ssh_runner.go:195] Run: systemctl --version
I1014 13:54:11.610096   25089 main.go:141] libmachine: (functional-917108) Calling .GetSSHHostname
I1014 13:54:11.613017   25089 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:11.613368   25089 main.go:141] libmachine: (functional-917108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bc:da", ip: ""} in network mk-functional-917108: {Iface:virbr1 ExpiryTime:2024-10-14 14:50:40 +0000 UTC Type:0 Mac:52:54:00:8b:bc:da Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:functional-917108 Clientid:01:52:54:00:8b:bc:da}
I1014 13:54:11.613399   25089 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined IP address 192.168.39.149 and MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:11.613594   25089 main.go:141] libmachine: (functional-917108) Calling .GetSSHPort
I1014 13:54:11.613791   25089 main.go:141] libmachine: (functional-917108) Calling .GetSSHKeyPath
I1014 13:54:11.613941   25089 main.go:141] libmachine: (functional-917108) Calling .GetSSHUsername
I1014 13:54:11.614100   25089 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/functional-917108/id_rsa Username:docker}
I1014 13:54:11.700806   25089 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 13:54:11.777021   25089 main.go:141] libmachine: Making call to close driver server
I1014 13:54:11.777032   25089 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:11.777294   25089 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:11.777311   25089 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 13:54:11.777360   25089 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
I1014 13:54:11.777416   25089 main.go:141] libmachine: Making call to close driver server
I1014 13:54:11.777434   25089 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:11.777700   25089 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:11.777713   25089 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
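
The JSON emitted by `image ls --format json` above is a flat array of objects with id, repoDigests, repoTags, and size fields, with size encoded as a string. A minimal Go sketch for decoding that output is below; the struct is inferred from the output shown here, not taken from minikube's source.

// Sketch only: decode the `minikube image ls --format json` output and
// print each tagged image with its size.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, as a string
}

func main() {
	var images []imageEntry
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s (%s bytes)\n", img.RepoTags[0], img.Size)
		}
	}
}

Usage would be something like: out/minikube-linux-amd64 -p functional-917108 image ls --format json | go run parse_images.go (parse_images.go being whatever file the sketch is saved as).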

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-917108 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d8050b8928c57f3e3795f425d9bc2be7437a0fc1b88713db9c4230e9f383f01b
repoDigests:
- localhost/minikube-local-cache-test@sha256:e8be6afd3c1f2202dd55f719a782e8264152e9d79e7986184caafa5547f03994
repoTags:
- localhost/minikube-local-cache-test:functional-917108
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-917108 image ls --format yaml --alsologtostderr:
I1014 13:54:05.745467   24929 out.go:345] Setting OutFile to fd 1 ...
I1014 13:54:05.745685   24929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:05.745693   24929 out.go:358] Setting ErrFile to fd 2...
I1014 13:54:05.745698   24929 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:05.745920   24929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
I1014 13:54:05.746543   24929 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:05.746662   24929 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:05.747052   24929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:05.747094   24929 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:05.762934   24929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39351
I1014 13:54:05.763525   24929 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:05.764102   24929 main.go:141] libmachine: Using API Version  1
I1014 13:54:05.764125   24929 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:05.764519   24929 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:05.764744   24929 main.go:141] libmachine: (functional-917108) Calling .GetState
I1014 13:54:05.766904   24929 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:05.766948   24929 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:05.782104   24929 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41297
I1014 13:54:05.782575   24929 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:05.783102   24929 main.go:141] libmachine: Using API Version  1
I1014 13:54:05.783132   24929 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:05.783493   24929 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:05.783732   24929 main.go:141] libmachine: (functional-917108) Calling .DriverName
I1014 13:54:05.783975   24929 ssh_runner.go:195] Run: systemctl --version
I1014 13:54:05.784006   24929 main.go:141] libmachine: (functional-917108) Calling .GetSSHHostname
I1014 13:54:05.787013   24929 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:05.787376   24929 main.go:141] libmachine: (functional-917108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bc:da", ip: ""} in network mk-functional-917108: {Iface:virbr1 ExpiryTime:2024-10-14 14:50:40 +0000 UTC Type:0 Mac:52:54:00:8b:bc:da Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:functional-917108 Clientid:01:52:54:00:8b:bc:da}
I1014 13:54:05.787414   24929 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined IP address 192.168.39.149 and MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:05.787542   24929 main.go:141] libmachine: (functional-917108) Calling .GetSSHPort
I1014 13:54:05.787699   24929 main.go:141] libmachine: (functional-917108) Calling .GetSSHKeyPath
I1014 13:54:05.787853   24929 main.go:141] libmachine: (functional-917108) Calling .GetSSHUsername
I1014 13:54:05.787983   24929 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/functional-917108/id_rsa Username:docker}
I1014 13:54:05.891720   24929 ssh_runner.go:195] Run: sudo crictl images --output json
I1014 13:54:05.945753   24929 main.go:141] libmachine: Making call to close driver server
I1014 13:54:05.945765   24929 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:05.946021   24929 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
I1014 13:54:05.946047   24929 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:05.946081   24929 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 13:54:05.946095   24929 main.go:141] libmachine: Making call to close driver server
I1014 13:54:05.946103   24929 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:05.946317   24929 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:05.946334   24929 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 13:54:05.946353   24929 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (5.96s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh pgrep buildkitd: exit status 1 (203.489784ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image build -t localhost/my-image:functional-917108 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 image build -t localhost/my-image:functional-917108 testdata/build --alsologtostderr: (5.491563243s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-917108 image build -t localhost/my-image:functional-917108 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 1f284ae02ee
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-917108
--> f9a812a6a35
Successfully tagged localhost/my-image:functional-917108
f9a812a6a356a8f6fe361cebb1bfbe00e227e8347db8699f31fc7682999d3c71
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-917108 image build -t localhost/my-image:functional-917108 testdata/build --alsologtostderr:
I1014 13:54:06.198362   24983 out.go:345] Setting OutFile to fd 1 ...
I1014 13:54:06.198519   24983 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:06.198530   24983 out.go:358] Setting ErrFile to fd 2...
I1014 13:54:06.198537   24983 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:54:06.198778   24983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
I1014 13:54:06.199344   24983 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:06.199896   24983 config.go:182] Loaded profile config "functional-917108": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1014 13:54:06.200283   24983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:06.200324   24983 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:06.215338   24983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38747
I1014 13:54:06.215866   24983 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:06.216393   24983 main.go:141] libmachine: Using API Version  1
I1014 13:54:06.216413   24983 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:06.216774   24983 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:06.216990   24983 main.go:141] libmachine: (functional-917108) Calling .GetState
I1014 13:54:06.218899   24983 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1014 13:54:06.218942   24983 main.go:141] libmachine: Launching plugin server for driver kvm2
I1014 13:54:06.233974   24983 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44533
I1014 13:54:06.234406   24983 main.go:141] libmachine: () Calling .GetVersion
I1014 13:54:06.234873   24983 main.go:141] libmachine: Using API Version  1
I1014 13:54:06.234904   24983 main.go:141] libmachine: () Calling .SetConfigRaw
I1014 13:54:06.235273   24983 main.go:141] libmachine: () Calling .GetMachineName
I1014 13:54:06.235495   24983 main.go:141] libmachine: (functional-917108) Calling .DriverName
I1014 13:54:06.235701   24983 ssh_runner.go:195] Run: systemctl --version
I1014 13:54:06.235721   24983 main.go:141] libmachine: (functional-917108) Calling .GetSSHHostname
I1014 13:54:06.238667   24983 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:06.239077   24983 main.go:141] libmachine: (functional-917108) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:bc:da", ip: ""} in network mk-functional-917108: {Iface:virbr1 ExpiryTime:2024-10-14 14:50:40 +0000 UTC Type:0 Mac:52:54:00:8b:bc:da Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:functional-917108 Clientid:01:52:54:00:8b:bc:da}
I1014 13:54:06.239097   24983 main.go:141] libmachine: (functional-917108) DBG | domain functional-917108 has defined IP address 192.168.39.149 and MAC address 52:54:00:8b:bc:da in network mk-functional-917108
I1014 13:54:06.239248   24983 main.go:141] libmachine: (functional-917108) Calling .GetSSHPort
I1014 13:54:06.239396   24983 main.go:141] libmachine: (functional-917108) Calling .GetSSHKeyPath
I1014 13:54:06.239554   24983 main.go:141] libmachine: (functional-917108) Calling .GetSSHUsername
I1014 13:54:06.239714   24983 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/functional-917108/id_rsa Username:docker}
I1014 13:54:06.345735   24983 build_images.go:161] Building image from path: /tmp/build.1133802919.tar
I1014 13:54:06.345794   24983 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 13:54:06.373141   24983 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1133802919.tar
I1014 13:54:06.382390   24983 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1133802919.tar: stat -c "%s %y" /var/lib/minikube/build/build.1133802919.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1133802919.tar': No such file or directory
I1014 13:54:06.382427   24983 ssh_runner.go:362] scp /tmp/build.1133802919.tar --> /var/lib/minikube/build/build.1133802919.tar (3072 bytes)
I1014 13:54:06.445017   24983 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1133802919
I1014 13:54:06.468734   24983 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1133802919 -xf /var/lib/minikube/build/build.1133802919.tar
I1014 13:54:06.490187   24983 crio.go:315] Building image: /var/lib/minikube/build/build.1133802919
I1014 13:54:06.490293   24983 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-917108 /var/lib/minikube/build/build.1133802919 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1014 13:54:11.606504   24983 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-917108 /var/lib/minikube/build/build.1133802919 --cgroup-manager=cgroupfs: (5.116186039s)
I1014 13:54:11.606572   24983 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1133802919
I1014 13:54:11.627545   24983 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1133802919.tar
I1014 13:54:11.640684   24983 build_images.go:217] Built localhost/my-image:functional-917108 from /tmp/build.1133802919.tar
I1014 13:54:11.640714   24983 build_images.go:133] succeeded building to: functional-917108
I1014 13:54:11.640718   24983 build_images.go:134] failed building to: 
I1014 13:54:11.640734   24983 main.go:141] libmachine: Making call to close driver server
I1014 13:54:11.640744   24983 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:11.640987   24983 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:11.641009   24983 main.go:141] libmachine: Making call to close connection to plugin binary
I1014 13:54:11.641017   24983 main.go:141] libmachine: Making call to close driver server
I1014 13:54:11.641024   24983 main.go:141] libmachine: (functional-917108) Calling .Close
I1014 13:54:11.640994   24983 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
I1014 13:54:11.641226   24983 main.go:141] libmachine: (functional-917108) DBG | Closing plugin on server side
I1014 13:54:11.641267   24983 main.go:141] libmachine: Successfully made call to close driver server
I1014 13:54:11.641284   24983 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.96s)
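Note: the three numbered steps in the stdout above imply a three-line Containerfile. As a readability aid only (the actual contents of testdata/build are not reproduced in this log), the build context would look roughly like:

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

with a content.txt file sitting next to the Containerfile in the build context.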

TestFunctional/parallel/ImageCommands/Setup (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-917108
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.71s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image load --daemon kicbase/echo-server:functional-917108 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-917108 image load --daemon kicbase/echo-server:functional-917108 --alsologtostderr: (1.464585463s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.71s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image load --daemon kicbase/echo-server:functional-917108 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-917108
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image load --daemon kicbase/echo-server:functional-917108 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 image save kicbase/echo-server:functional-917108 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.76s)

TestFunctional/parallel/MountCmd/specific-port (1.8s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdspecific-port657852889/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (233.513999ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 13:54:00.642112   15023 retry.go:31] will retry after 509.033231ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdspecific-port657852889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh "sudo umount -f /mount-9p": exit status 1 (215.535782ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-917108 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdspecific-port657852889/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2980913196/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2980913196/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2980913196/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T" /mount1: exit status 1 (257.146061ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1014 13:54:02.467973   15023 retry.go:31] will retry after 595.302879ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-917108 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-917108 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2980913196/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2980913196/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-917108 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2980913196/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.52s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-917108
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-917108
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-917108
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (196.2s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-450021 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1014 13:56:06.401635   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:34.103071   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-450021 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m15.519555561s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (196.20s)

TestMultiControlPlane/serial/DeployApp (5.91s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-450021 -- rollout status deployment/busybox: (3.742253241s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-fkz82 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-lrvnn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-nt6q5 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-fkz82 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-lrvnn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-nt6q5 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-fkz82 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-lrvnn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-nt6q5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.91s)
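Note: the manifest applied above, testdata/ha/ha-pod-dns-test.yaml, is not reproduced in this log. As a rough sketch only (image and command are assumptions, not taken from the fixture), a comparable manifest that yields the three busybox-7dff88458-* pods exercised by the nslookup checks would be a three-replica Deployment along these lines:

	apiVersion: apps/v1
	kind: Deployment
	metadata:
	  name: busybox
	spec:
	  replicas: 3
	  selector:
	    matchLabels:
	      app: busybox
	  template:
	    metadata:
	      labels:
	        app: busybox
	    spec:
	      containers:
	      - name: busybox
	        image: gcr.io/k8s-minikube/busybox:1.28   # assumed image
	        command: ["sleep", "3600"]                # assumed; keeps the pod alive for kubectl exec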

TestMultiControlPlane/serial/PingHostFromPods (1.17s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-fkz82 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-fkz82 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-lrvnn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-lrvnn -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-nt6q5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-450021 -- exec busybox-7dff88458-nt6q5 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.17s)

TestMultiControlPlane/serial/AddWorkerNode (56.71s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-450021 -v=7 --alsologtostderr
E1014 13:58:36.994504   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:37.000923   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:37.012311   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:37.033799   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:37.075525   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:37.156936   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:37.319143   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:37.641131   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:58:38.283170   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-450021 -v=7 --alsologtostderr: (55.829848144s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
E1014 13:58:39.565324   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (56.71s)

TestMultiControlPlane/serial/NodeLabels (0.07s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-450021 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.85s)

TestMultiControlPlane/serial/CopyFile (13.06s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp testdata/cp-test.txt ha-450021:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021.txt
E1014 13:58:42.127381   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021:/home/docker/cp-test.txt ha-450021-m02:/home/docker/cp-test_ha-450021_ha-450021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test_ha-450021_ha-450021-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021:/home/docker/cp-test.txt ha-450021-m03:/home/docker/cp-test_ha-450021_ha-450021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test_ha-450021_ha-450021-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021:/home/docker/cp-test.txt ha-450021-m04:/home/docker/cp-test_ha-450021_ha-450021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test_ha-450021_ha-450021-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp testdata/cp-test.txt ha-450021-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m02:/home/docker/cp-test.txt ha-450021:/home/docker/cp-test_ha-450021-m02_ha-450021.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test_ha-450021-m02_ha-450021.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m02:/home/docker/cp-test.txt ha-450021-m03:/home/docker/cp-test_ha-450021-m02_ha-450021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test_ha-450021-m02_ha-450021-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m02:/home/docker/cp-test.txt ha-450021-m04:/home/docker/cp-test_ha-450021-m02_ha-450021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test.txt"
E1014 13:58:47.249326   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test_ha-450021-m02_ha-450021-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp testdata/cp-test.txt ha-450021-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt ha-450021:/home/docker/cp-test_ha-450021-m03_ha-450021.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test_ha-450021-m03_ha-450021.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt ha-450021-m02:/home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test_ha-450021-m03_ha-450021-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m03:/home/docker/cp-test.txt ha-450021-m04:/home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test_ha-450021-m03_ha-450021-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp testdata/cp-test.txt ha-450021-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3029314565/001/cp-test_ha-450021-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt ha-450021:/home/docker/cp-test_ha-450021-m04_ha-450021.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021 "sudo cat /home/docker/cp-test_ha-450021-m04_ha-450021.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt ha-450021-m02:/home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m02 "sudo cat /home/docker/cp-test_ha-450021-m04_ha-450021-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 cp ha-450021-m04:/home/docker/cp-test.txt ha-450021-m03:/home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 ssh -n ha-450021-m03 "sudo cat /home/docker/cp-test_ha-450021-m04_ha-450021-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.06s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.9s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 node delete m03 -v=7 --alsologtostderr
E1014 14:08:36.994318   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-450021 node delete m03 -v=7 --alsologtostderr: (16.168937893s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.90s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.62s)

TestMultiControlPlane/serial/RestartCluster (347.55s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-450021 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1014 14:13:36.994246   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:15:00.065471   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:16:06.401056   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-450021 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m46.639971517s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (347.55s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (75.75s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-450021 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-450021 --control-plane -v=7 --alsologtostderr: (1m14.901720409s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-450021 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.75s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.85s)

TestJSONOutput/start/Command (53.68s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-437798 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1014 14:18:36.993919   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-437798 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (53.681864146s)
--- PASS: TestJSONOutput/start/Command (53.68s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.7s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-437798 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-437798 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.35s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-437798 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-437798 --output=json --user=testUser: (7.351868743s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-772078 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-772078 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (65.929411ms)

-- stdout --
	{"specversion":"1.0","id":"1d12cbe3-5396-4a89-beaa-906a8f0ed3b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-772078] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"912e9b23-d4f9-4446-8cca-72e807944983","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"c5b8a62d-220c-4af3-b611-bfb85cf1d6b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3e592443-3951-4e82-9829-50c26b81d988","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig"}}
	{"specversion":"1.0","id":"2b86b154-7884-4b93-82ca-ac2446404a22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube"}}
	{"specversion":"1.0","id":"a3259931-dcc6-427a-8668-54c2cbc97b4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ffc12113-c398-4431-92f3-a7cb23f88355","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f4ed532e-2e40-41ea-9154-7eb77eceeb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-772078" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-772078
--- PASS: TestErrorJSONOutput (0.20s)
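Note: each line in the -- stdout -- block above is a CloudEvents-style JSON object. A minimal Go sketch (not part of the test suite) showing how such a line can be decoded to pull out the error name, exit code, and message:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the fields visible in the minikube JSON output above;
	// every value under "data" is a string in that output.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// The error event copied from the -- stdout -- block above.
		line := `{"specversion":"1.0","id":"f4ed532e-2e40-41ea-9154-7eb77eceeb31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`

		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// Prints: DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}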

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (86.01s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-404363 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-404363 --driver=kvm2  --container-runtime=crio: (42.570375253s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-417251 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-417251 --driver=kvm2  --container-runtime=crio: (40.620950173s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-404363
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-417251
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-417251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-417251
helpers_test.go:175: Cleaning up "first-404363" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-404363
--- PASS: TestMinikubeProfile (86.01s)

TestMountStart/serial/StartWithMountFirst (27.97s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-972107 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1014 14:21:06.400904   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-972107 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.971575581s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.97s)

TestMountStart/serial/VerifyMountFirst (0.37s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-972107 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-972107 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.37s)

TestMountStart/serial/StartWithMountSecond (28.39s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-990236 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-990236 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.394243569s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.39s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-990236 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-990236 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.66s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-972107 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.66s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-990236 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-990236 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-990236
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-990236: (1.268675198s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (22.73s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-990236
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-990236: (21.730711068s)
--- PASS: TestMountStart/serial/RestartStopped (22.73s)

TestMountStart/serial/VerifyMountPostStop (0.37s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-990236 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-990236 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

TestMultiNode/serial/FreshStart2Nodes (111.52s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-740856 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1014 14:23:36.994907   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-740856 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m51.09158269s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.52s)

TestMultiNode/serial/DeployApp2Nodes (5.01s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-740856 -- rollout status deployment/busybox: (3.539899425s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-tlz6j -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-wvl6s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-tlz6j -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-wvl6s -- nslookup kubernetes.default
E1014 14:24:09.468117   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-tlz6j -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-wvl6s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.01s)

TestMultiNode/serial/PingHostFrom2Pods (0.79s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-tlz6j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-tlz6j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-wvl6s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-740856 -- exec busybox-7dff88458-wvl6s -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

TestMultiNode/serial/AddNode (46.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-740856 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-740856 -v 3 --alsologtostderr: (46.385237022s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.95s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-740856 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.58s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.58s)

TestMultiNode/serial/CopyFile (7.15s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp testdata/cp-test.txt multinode-740856:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1440328619/001/cp-test_multinode-740856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856:/home/docker/cp-test.txt multinode-740856-m02:/home/docker/cp-test_multinode-740856_multinode-740856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m02 "sudo cat /home/docker/cp-test_multinode-740856_multinode-740856-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856:/home/docker/cp-test.txt multinode-740856-m03:/home/docker/cp-test_multinode-740856_multinode-740856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m03 "sudo cat /home/docker/cp-test_multinode-740856_multinode-740856-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp testdata/cp-test.txt multinode-740856-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1440328619/001/cp-test_multinode-740856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt multinode-740856:/home/docker/cp-test_multinode-740856-m02_multinode-740856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856 "sudo cat /home/docker/cp-test_multinode-740856-m02_multinode-740856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856-m02:/home/docker/cp-test.txt multinode-740856-m03:/home/docker/cp-test_multinode-740856-m02_multinode-740856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m03 "sudo cat /home/docker/cp-test_multinode-740856-m02_multinode-740856-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp testdata/cp-test.txt multinode-740856-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1440328619/001/cp-test_multinode-740856-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt multinode-740856:/home/docker/cp-test_multinode-740856-m03_multinode-740856.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856 "sudo cat /home/docker/cp-test_multinode-740856-m03_multinode-740856.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 cp multinode-740856-m03:/home/docker/cp-test.txt multinode-740856-m02:/home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 ssh -n multinode-740856-m02 "sudo cat /home/docker/cp-test_multinode-740856-m03_multinode-740856-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.15s)

TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-740856 node stop m03: (1.467892683s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-740856 status: exit status 7 (425.279837ms)

                                                
                                                
-- stdout --
	multinode-740856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-740856-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-740856-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr: exit status 7 (417.644687ms)

                                                
                                                
-- stdout --
	multinode-740856
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-740856-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-740856-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:25:07.479307   42441 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:25:07.479417   42441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:25:07.479426   42441 out.go:358] Setting ErrFile to fd 2...
	I1014 14:25:07.479431   42441 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:25:07.479615   42441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:25:07.479769   42441 out.go:352] Setting JSON to false
	I1014 14:25:07.479794   42441 mustload.go:65] Loading cluster: multinode-740856
	I1014 14:25:07.479845   42441 notify.go:220] Checking for updates...
	I1014 14:25:07.480361   42441 config.go:182] Loaded profile config "multinode-740856": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:25:07.480387   42441 status.go:174] checking status of multinode-740856 ...
	I1014 14:25:07.480920   42441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:25:07.480973   42441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:25:07.496862   42441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38201
	I1014 14:25:07.497221   42441 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:25:07.497818   42441 main.go:141] libmachine: Using API Version  1
	I1014 14:25:07.497843   42441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:25:07.498157   42441 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:25:07.498338   42441 main.go:141] libmachine: (multinode-740856) Calling .GetState
	I1014 14:25:07.499968   42441 status.go:371] multinode-740856 host status = "Running" (err=<nil>)
	I1014 14:25:07.499986   42441 host.go:66] Checking if "multinode-740856" exists ...
	I1014 14:25:07.500268   42441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:25:07.500300   42441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:25:07.515607   42441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36713
	I1014 14:25:07.516094   42441 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:25:07.516547   42441 main.go:141] libmachine: Using API Version  1
	I1014 14:25:07.516566   42441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:25:07.516916   42441 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:25:07.517080   42441 main.go:141] libmachine: (multinode-740856) Calling .GetIP
	I1014 14:25:07.520035   42441 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:25:07.520450   42441 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:25:07.520482   42441 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:25:07.520563   42441 host.go:66] Checking if "multinode-740856" exists ...
	I1014 14:25:07.520952   42441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:25:07.520997   42441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:25:07.536899   42441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45505
	I1014 14:25:07.537236   42441 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:25:07.537649   42441 main.go:141] libmachine: Using API Version  1
	I1014 14:25:07.537667   42441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:25:07.537944   42441 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:25:07.538112   42441 main.go:141] libmachine: (multinode-740856) Calling .DriverName
	I1014 14:25:07.538282   42441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:25:07.538316   42441 main.go:141] libmachine: (multinode-740856) Calling .GetSSHHostname
	I1014 14:25:07.540953   42441 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:25:07.541327   42441 main.go:141] libmachine: (multinode-740856) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:75:cf:00", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:22:28 +0000 UTC Type:0 Mac:52:54:00:75:cf:00 Iaid: IPaddr:192.168.39.46 Prefix:24 Hostname:multinode-740856 Clientid:01:52:54:00:75:cf:00}
	I1014 14:25:07.541347   42441 main.go:141] libmachine: (multinode-740856) DBG | domain multinode-740856 has defined IP address 192.168.39.46 and MAC address 52:54:00:75:cf:00 in network mk-multinode-740856
	I1014 14:25:07.541479   42441 main.go:141] libmachine: (multinode-740856) Calling .GetSSHPort
	I1014 14:25:07.541620   42441 main.go:141] libmachine: (multinode-740856) Calling .GetSSHKeyPath
	I1014 14:25:07.541737   42441 main.go:141] libmachine: (multinode-740856) Calling .GetSSHUsername
	I1014 14:25:07.541851   42441 sshutil.go:53] new ssh client: &{IP:192.168.39.46 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856/id_rsa Username:docker}
	I1014 14:25:07.621763   42441 ssh_runner.go:195] Run: systemctl --version
	I1014 14:25:07.627587   42441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:25:07.642320   42441 kubeconfig.go:125] found "multinode-740856" server: "https://192.168.39.46:8443"
	I1014 14:25:07.642353   42441 api_server.go:166] Checking apiserver status ...
	I1014 14:25:07.642384   42441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:25:07.655961   42441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1046/cgroup
	W1014 14:25:07.666352   42441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1046/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1014 14:25:07.666414   42441 ssh_runner.go:195] Run: ls
	I1014 14:25:07.671253   42441 api_server.go:253] Checking apiserver healthz at https://192.168.39.46:8443/healthz ...
	I1014 14:25:07.675460   42441 api_server.go:279] https://192.168.39.46:8443/healthz returned 200:
	ok
	I1014 14:25:07.675480   42441 status.go:463] multinode-740856 apiserver status = Running (err=<nil>)
	I1014 14:25:07.675491   42441 status.go:176] multinode-740856 status: &{Name:multinode-740856 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:25:07.675509   42441 status.go:174] checking status of multinode-740856-m02 ...
	I1014 14:25:07.675890   42441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:25:07.675934   42441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:25:07.690930   42441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33227
	I1014 14:25:07.691411   42441 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:25:07.691872   42441 main.go:141] libmachine: Using API Version  1
	I1014 14:25:07.691894   42441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:25:07.692221   42441 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:25:07.692379   42441 main.go:141] libmachine: (multinode-740856-m02) Calling .GetState
	I1014 14:25:07.693871   42441 status.go:371] multinode-740856-m02 host status = "Running" (err=<nil>)
	I1014 14:25:07.693888   42441 host.go:66] Checking if "multinode-740856-m02" exists ...
	I1014 14:25:07.694265   42441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:25:07.694307   42441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:25:07.709905   42441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33885
	I1014 14:25:07.710302   42441 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:25:07.710834   42441 main.go:141] libmachine: Using API Version  1
	I1014 14:25:07.710855   42441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:25:07.711209   42441 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:25:07.711370   42441 main.go:141] libmachine: (multinode-740856-m02) Calling .GetIP
	I1014 14:25:07.714155   42441 main.go:141] libmachine: (multinode-740856-m02) DBG | domain multinode-740856-m02 has defined MAC address 52:54:00:a4:68:2d in network mk-multinode-740856
	I1014 14:25:07.714570   42441 main.go:141] libmachine: (multinode-740856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:68:2d", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:23:30 +0000 UTC Type:0 Mac:52:54:00:a4:68:2d Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-740856-m02 Clientid:01:52:54:00:a4:68:2d}
	I1014 14:25:07.714628   42441 main.go:141] libmachine: (multinode-740856-m02) DBG | domain multinode-740856-m02 has defined IP address 192.168.39.81 and MAC address 52:54:00:a4:68:2d in network mk-multinode-740856
	I1014 14:25:07.714767   42441 host.go:66] Checking if "multinode-740856-m02" exists ...
	I1014 14:25:07.715181   42441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:25:07.715227   42441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:25:07.730139   42441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37791
	I1014 14:25:07.730565   42441 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:25:07.731047   42441 main.go:141] libmachine: Using API Version  1
	I1014 14:25:07.731073   42441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:25:07.731369   42441 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:25:07.731528   42441 main.go:141] libmachine: (multinode-740856-m02) Calling .DriverName
	I1014 14:25:07.731670   42441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:25:07.731688   42441 main.go:141] libmachine: (multinode-740856-m02) Calling .GetSSHHostname
	I1014 14:25:07.734251   42441 main.go:141] libmachine: (multinode-740856-m02) DBG | domain multinode-740856-m02 has defined MAC address 52:54:00:a4:68:2d in network mk-multinode-740856
	I1014 14:25:07.734696   42441 main.go:141] libmachine: (multinode-740856-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a4:68:2d", ip: ""} in network mk-multinode-740856: {Iface:virbr1 ExpiryTime:2024-10-14 15:23:30 +0000 UTC Type:0 Mac:52:54:00:a4:68:2d Iaid: IPaddr:192.168.39.81 Prefix:24 Hostname:multinode-740856-m02 Clientid:01:52:54:00:a4:68:2d}
	I1014 14:25:07.734721   42441 main.go:141] libmachine: (multinode-740856-m02) DBG | domain multinode-740856-m02 has defined IP address 192.168.39.81 and MAC address 52:54:00:a4:68:2d in network mk-multinode-740856
	I1014 14:25:07.734886   42441 main.go:141] libmachine: (multinode-740856-m02) Calling .GetSSHPort
	I1014 14:25:07.735053   42441 main.go:141] libmachine: (multinode-740856-m02) Calling .GetSSHKeyPath
	I1014 14:25:07.735181   42441 main.go:141] libmachine: (multinode-740856-m02) Calling .GetSSHUsername
	I1014 14:25:07.735314   42441 sshutil.go:53] new ssh client: &{IP:192.168.39.81 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19790-7836/.minikube/machines/multinode-740856-m02/id_rsa Username:docker}
	I1014 14:25:07.817875   42441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:25:07.832650   42441 status.go:176] multinode-740856-m02 status: &{Name:multinode-740856-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:25:07.832686   42441 status.go:174] checking status of multinode-740856-m03 ...
	I1014 14:25:07.833046   42441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1014 14:25:07.833099   42441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1014 14:25:07.849078   42441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39961
	I1014 14:25:07.849531   42441 main.go:141] libmachine: () Calling .GetVersion
	I1014 14:25:07.850010   42441 main.go:141] libmachine: Using API Version  1
	I1014 14:25:07.850033   42441 main.go:141] libmachine: () Calling .SetConfigRaw
	I1014 14:25:07.850350   42441 main.go:141] libmachine: () Calling .GetMachineName
	I1014 14:25:07.850538   42441 main.go:141] libmachine: (multinode-740856-m03) Calling .GetState
	I1014 14:25:07.852134   42441 status.go:371] multinode-740856-m03 host status = "Stopped" (err=<nil>)
	I1014 14:25:07.852146   42441 status.go:384] host is not running, skipping remaining checks
	I1014 14:25:07.852152   42441 status.go:176] multinode-740856-m03 status: &{Name:multinode-740856-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)

TestMultiNode/serial/StartAfterStop (39.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-740856 node start m03 -v=7 --alsologtostderr: (38.646084706s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.28s)

TestMultiNode/serial/DeleteNode (2.31s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-740856 node delete m03: (1.791839594s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.31s)

TestMultiNode/serial/RestartMultiNode (174.49s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-740856 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1014 14:36:06.401456   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-740856 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m53.965204037s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-740856 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (174.49s)

TestMultiNode/serial/ValidateNameConflict (49.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-740856
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-740856-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-740856-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (69.503151ms)

                                                
                                                
-- stdout --
	* [multinode-740856-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-740856-m02' is duplicated with machine name 'multinode-740856-m02' in profile 'multinode-740856'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-740856-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-740856-m03 --driver=kvm2  --container-runtime=crio: (48.656490749s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-740856
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-740856: exit status 80 (210.104386ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-740856 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-740856-m03 already exists in multinode-740856-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-740856-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.77s)

TestScheduledStopUnix (115.85s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-449282 --memory=2048 --driver=kvm2  --container-runtime=crio
E1014 14:40:49.471041   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-449282 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.229718662s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-449282 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-449282 -n scheduled-stop-449282
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-449282 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1014 14:40:59.272869   15023 retry.go:31] will retry after 87.41µs: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.273984   15023 retry.go:31] will retry after 224.782µs: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.275151   15023 retry.go:31] will retry after 277.171µs: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.276286   15023 retry.go:31] will retry after 188.679µs: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.277395   15023 retry.go:31] will retry after 495.329µs: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.278495   15023 retry.go:31] will retry after 521.492µs: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.279593   15023 retry.go:31] will retry after 971.985µs: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.280688   15023 retry.go:31] will retry after 1.241506ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.282890   15023 retry.go:31] will retry after 2.593461ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.286154   15023 retry.go:31] will retry after 4.776114ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.291393   15023 retry.go:31] will retry after 3.741265ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.295640   15023 retry.go:31] will retry after 4.814003ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.300919   15023 retry.go:31] will retry after 11.977066ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.313157   15023 retry.go:31] will retry after 10.63891ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.324389   15023 retry.go:31] will retry after 20.001693ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
I1014 14:40:59.344629   15023 retry.go:31] will retry after 27.769398ms: open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/scheduled-stop-449282/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-449282 --cancel-scheduled
E1014 14:41:06.401458   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-449282 -n scheduled-stop-449282
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-449282
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-449282 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-449282
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-449282: exit status 7 (62.594175ms)

                                                
                                                
-- stdout --
	scheduled-stop-449282
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-449282 -n scheduled-stop-449282
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-449282 -n scheduled-stop-449282: exit status 7 (62.178916ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-449282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-449282
--- PASS: TestScheduledStopUnix (115.85s)

TestRunningBinaryUpgrade (220.7s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2925719949 start -p running-upgrade-833927 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2925719949 start -p running-upgrade-833927 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m53.918769257s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-833927 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-833927 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m44.450139372s)
helpers_test.go:175: Cleaning up "running-upgrade-833927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-833927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-833927: (1.806124054s)
--- PASS: TestRunningBinaryUpgrade (220.70s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229138 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-229138 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (80.904015ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-229138] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (96.06s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229138 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229138 --driver=kvm2  --container-runtime=crio: (1m35.797328391s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-229138 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.06s)

TestNetworkPlugins/group/false (3s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-517678 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-517678 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (104.304973ms)

                                                
                                                
-- stdout --
	* [false-517678] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:42:13.512264   50055 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:42:13.512451   50055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:42:13.512464   50055 out.go:358] Setting ErrFile to fd 2...
	I1014 14:42:13.512472   50055 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:42:13.512779   50055 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-7836/.minikube/bin
	I1014 14:42:13.513514   50055 out.go:352] Setting JSON to false
	I1014 14:42:13.514631   50055 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":5083,"bootTime":1728911850,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1014 14:42:13.514763   50055 start.go:139] virtualization: kvm guest
	I1014 14:42:13.517159   50055 out.go:177] * [false-517678] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1014 14:42:13.518717   50055 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:42:13.518718   50055 notify.go:220] Checking for updates...
	I1014 14:42:13.520273   50055 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:42:13.521573   50055 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-7836/kubeconfig
	I1014 14:42:13.522837   50055 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-7836/.minikube
	I1014 14:42:13.524037   50055 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1014 14:42:13.525235   50055 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:42:13.526820   50055 config.go:182] Loaded profile config "NoKubernetes-229138": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:42:13.526947   50055 config.go:182] Loaded profile config "force-systemd-env-338682": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:42:13.527049   50055 config.go:182] Loaded profile config "offline-crio-190817": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1014 14:42:13.527143   50055 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:42:13.563717   50055 out.go:177] * Using the kvm2 driver based on user configuration
	I1014 14:42:13.565069   50055 start.go:297] selected driver: kvm2
	I1014 14:42:13.565084   50055 start.go:901] validating driver "kvm2" against <nil>
	I1014 14:42:13.565097   50055 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:42:13.567281   50055 out.go:201] 
	W1014 14:42:13.568723   50055 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1014 14:42:13.569998   50055 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-517678 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-517678

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-517678

>>> host: crictl pods:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: crictl containers:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> k8s: describe netcat deployment:
error: context "false-517678" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-517678" does not exist

>>> k8s: netcat logs:
error: context "false-517678" does not exist

>>> k8s: describe coredns deployment:
error: context "false-517678" does not exist

>>> k8s: describe coredns pods:
error: context "false-517678" does not exist

>>> k8s: coredns logs:
error: context "false-517678" does not exist

>>> k8s: describe api server pod(s):
error: context "false-517678" does not exist

>>> k8s: api server logs:
error: context "false-517678" does not exist

>>> host: /etc/cni:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: ip a s:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: ip r s:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: iptables-save:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: iptables table nat:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> k8s: describe kube-proxy daemon set:
error: context "false-517678" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-517678" does not exist

>>> k8s: kube-proxy logs:
error: context "false-517678" does not exist

>>> host: kubelet daemon status:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: kubelet daemon config:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> k8s: kubelet logs:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-517678

>>> host: docker daemon status:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: docker daemon config:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /etc/docker/daemon.json:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: docker system info:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: cri-docker daemon status:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: cri-docker daemon config:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: cri-dockerd version:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: containerd daemon status:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: containerd daemon config:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /etc/containerd/config.toml:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: containerd config dump:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: crio daemon status:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: crio daemon config:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: /etc/crio:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

>>> host: crio config:
* Profile "false-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-517678"

----------------------- debugLogs end: false-517678 [took: 2.761779953s] --------------------------------
helpers_test.go:175: Cleaning up "false-517678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-517678
--- PASS: TestNetworkPlugins/group/false (3.00s)

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestStoppedBinaryUpgrade/Upgrade (154.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2715540087 start -p stopped-upgrade-718432 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E1014 14:43:36.994245   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2715540087 start -p stopped-upgrade-718432 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m44.391591812s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2715540087 -p stopped-upgrade-718432 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2715540087 -p stopped-upgrade-718432 stop: (2.144006977s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-718432 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-718432 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.386845875s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (154.92s)

TestNoKubernetes/serial/StartWithStopK8s (62.51s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229138 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229138 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m1.445588407s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-229138 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-229138 status -o json: exit status 2 (242.210722ms)

-- stdout --
	{"Name":"NoKubernetes-229138","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-229138
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (62.51s)

TestNoKubernetes/serial/Start (29.97s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229138 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229138 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.969853827s)
--- PASS: TestNoKubernetes/serial/Start (29.97s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-229138 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-229138 "sudo systemctl is-active --quiet service kubelet": exit status 1 (198.508288ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)

TestNoKubernetes/serial/ProfileList (29.24s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.523732173s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.720903493s)
--- PASS: TestNoKubernetes/serial/ProfileList (29.24s)

TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-229138
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-229138: (1.299054352s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

TestNoKubernetes/serial/StartNoArgs (21.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-229138 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-229138 --driver=kvm2  --container-runtime=crio: (21.325499894s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (21.33s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-718432
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-229138 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-229138 "sudo systemctl is-active --quiet service kubelet": exit status 1 (207.039238ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestPause/serial/Start (66.08s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-329024 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-329024 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m6.075630342s)
--- PASS: TestPause/serial/Start (66.08s)

TestNetworkPlugins/group/auto/Start (97.19s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m37.194411107s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.19s)

TestPause/serial/SecondStartNoReconfiguration (51.74s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-329024 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1014 14:48:36.994086   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-329024 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (51.724603725s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.74s)

TestPause/serial/Pause (0.72s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-329024 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

TestPause/serial/VerifyStatus (0.25s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-329024 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-329024 --output=json --layout=cluster: exit status 2 (246.311643ms)

-- stdout --
	{"Name":"pause-329024","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-329024","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)

TestPause/serial/Unpause (0.63s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-329024 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.63s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-329024 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (0.81s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-329024 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.81s)

TestPause/serial/VerifyDeletedResources (0.65s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.65s)

TestNetworkPlugins/group/kindnet/Start (64.53s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m4.533286422s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (64.53s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-517678 "pgrep -a kubelet"
I1014 14:49:27.587529   15023 config.go:182] Loaded profile config "auto-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (11.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-517678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5bnpq" [e4887a47-dc67-4448-a17f-a93e2a3a197d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5bnpq" [e4887a47-dc67-4448-a17f-a93e2a3a197d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.005557944s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.22s)

TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-517678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (78.48s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m18.484204944s)
--- PASS: TestNetworkPlugins/group/calico/Start (78.48s)

TestNetworkPlugins/group/custom-flannel/Start (85.7s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m25.702365592s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (85.70s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xxw92" [44294be1-6efc-45b4-8e51-b783c1fa8eb8] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005028953s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-517678 "pgrep -a kubelet"
I1014 14:50:35.675936   15023 config.go:182] Loaded profile config "kindnet-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-517678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ncrf8" [9fad6016-fa59-4e31-ba29-e27609e12a68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ncrf8" [9fad6016-fa59-4e31-ba29-e27609e12a68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.210537849s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-517678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/enable-default-cni/Start (58.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E1014 14:51:06.401505   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (58.171774789s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (58.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w4pfm" [d9a16f61-d1c5-4a2a-92c6-5242021ad0b2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005634785s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-517678 "pgrep -a kubelet"
I1014 14:51:13.370816   15023 config.go:182] Loaded profile config "calico-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (12.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-517678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h6wt6" [9904b6a2-77b0-4fd0-a6ed-77d91998dc51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h6wt6" [9904b6a2-77b0-4fd0-a6ed-77d91998dc51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004687948s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.28s)

TestNetworkPlugins/group/flannel/Start (78.01s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m18.007288516s)
--- PASS: TestNetworkPlugins/group/flannel/Start (78.01s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-517678 "pgrep -a kubelet"
I1014 14:51:21.960743   15023 config.go:182] Loaded profile config "custom-flannel-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-517678 replace --force -f testdata/netcat-deployment.yaml
I1014 14:51:22.171927   15023 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zchp7" [1d4ecefc-88b2-45b1-a1ff-6bec56b9a2c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zchp7" [1d4ecefc-88b2-45b1-a1ff-6bec56b9a2c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005651649s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-517678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

TestNetworkPlugins/group/calico/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-517678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (64.5s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-517678 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m4.49596349s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.50s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-517678 "pgrep -a kubelet"
I1014 14:52:01.613766   15023 config.go:182] Loaded profile config "enable-default-cni-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.22s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-517678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b4b4d" [9351e16f-abea-4f06-8904-a89a3724d577] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-b4b4d" [9351e16f-abea-4f06-8904-a89a3724d577] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004236828s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-517678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (83.42s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-813300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-813300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m23.424480799s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (83.42s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gdzsf" [67cc7a6e-d061-439e-a37a-04e93809b7b6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005862708s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-517678 "pgrep -a kubelet"
I1014 14:52:44.468731   15023 config.go:182] Loaded profile config "flannel-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-517678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-skqd4" [b863fe0d-1b7a-4d5e-8a8b-2dc76234eb55] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-skqd4" [b863fe0d-1b7a-4d5e-8a8b-2dc76234eb55] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004208917s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.27s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-517678 "pgrep -a kubelet"
I1014 14:52:48.863128   15023 config.go:182] Loaded profile config "bridge-517678": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.25s)

TestNetworkPlugins/group/bridge/NetCatPod (13.34s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-517678 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gds7z" [2056741e-ab06-42ca-87e2-780b7b3c700d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gds7z" [2056741e-ab06-42ca-87e2-780b7b3c700d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004365769s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.34s)

TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-517678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (16.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-517678 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-517678 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.14499247s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1014 14:53:17.345906   15023 retry.go:31] will retry after 1.477006371s: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-517678 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (16.78s)
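The DNS check above fails on its first attempt (the in-pod nslookup times out before CoreDNS answers) and passes once retry.go backs off for roughly 1.5 s and repeats the probe. A hedged sketch of that retry-with-backoff pattern follows; the function name and backoff values are illustrative, the real helper lives in minikube's retry package.

package main

import (
	"log"
	"math/rand"
	"os/exec"
	"time"
)

// retryCommand runs the command up to `attempts` times, sleeping a short
// randomized backoff between failures, and returns the last error.
func retryCommand(attempts int, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		if _, err = exec.Command(name, args...).Output(); err == nil {
			return nil
		}
		backoff := time.Duration(500+rand.Intn(1500)) * time.Millisecond
		log.Printf("will retry after %s: %v", backoff, err)
		time.Sleep(backoff)
	}
	return err
}

func main() {
	// The DNS probe the test repeats until CoreDNS answers.
	if err := retryCommand(5, "kubectl", "--context", "bridge-517678",
		"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"); err != nil {
		log.Fatal(err)
	}
}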

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (93.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-989166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-989166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m33.280337593s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-517678 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)
E1014 15:22:38.241145   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
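Localhost and HairPin round out the per-plugin checks: from inside the netcat pod, nc must reach port 8080 on localhost and on the pod's own Service name, the latter exercising hairpin NAT through the CNI. A minimal sketch of those two probes, assuming the Service is named netcat like the deployment used above:

package main

import (
	"log"
	"os/exec"
)

// probe execs `nc` inside the netcat pod against the given target on port 8080.
func probe(kubeContext, target string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z "+target+" 8080").Run()
}

func main() {
	ctx := "bridge-517678"
	if err := probe(ctx, "localhost"); err != nil {
		log.Fatalf("localhost probe failed: %v", err)
	}
	// Hitting the pod's own Service name from inside the pod is the hairpin case.
	if err := probe(ctx, "netcat"); err != nil {
		log.Fatalf("hairpin probe failed: %v", err)
	}
	log.Println("localhost and hairpin probes succeeded")
}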

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.64s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-201291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 14:53:36.994733   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/functional-917108/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-201291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (1m39.635165334s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (99.64s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-813300 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [270d97b3-f42b-4569-a724-6bf0683f7da4] Pending
helpers_test.go:344: "busybox" [270d97b3-f42b-4569-a724-6bf0683f7da4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [270d97b3-f42b-4569-a724-6bf0683f7da4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004093347s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-813300 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.32s)
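DeployApp sanity-checks a freshly started cluster by creating the busybox pod from testdata/busybox.yaml, waiting for it to run, and reading `ulimit -n` inside the container. The sketch below mirrors that sequence under stated assumptions: it shells out to kubectl, and the polling interval and attempt count are guesses, since the real test uses the shared pod-wait helpers.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	ctx := "no-preload-813300"
	run := func(args ...string) (string, error) {
		out, err := exec.Command("kubectl",
			append([]string{"--context", ctx}, args...)...).CombinedOutput()
		return strings.TrimSpace(string(out)), err
	}
	if out, err := run("create", "-f", "testdata/busybox.yaml"); err != nil {
		log.Fatalf("create failed: %v\n%s", err, out)
	}
	// Poll until the pod reports Running.
	for i := 0; i < 60; i++ {
		if phase, _ := run("get", "pod", "busybox", "-o", "jsonpath={.status.phase}"); phase == "Running" {
			break
		}
		time.Sleep(2 * time.Second)
	}
	limit, err := run("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
	fmt.Printf("ulimit -n inside busybox: %s (err=%v)\n", limit, err)
}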

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-813300 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-813300 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)
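EnableAddonWhileActive turns on the metrics-server addon with its image pinned to registry.k8s.io/echoserver:1.4 and its registry overridden to fake.domain, then describes the resulting deployment. The sketch below shows one way to confirm the override landed; the string check on the describe output is an assumption about what the test asserts.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	profile := "no-preload-813300"
	enable := exec.Command("out/minikube-linux-amd64", "addons", "enable", "metrics-server",
		"-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		log.Fatalf("enable failed: %v\n%s", err, out)
	}
	describe, err := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system").Output()
	if err != nil {
		log.Fatalf("describe failed: %v", err)
	}
	// If the override worked, the deployment's image reference points at fake.domain.
	fmt.Println("registry override visible:", strings.Contains(string(describe), "fake.domain"))
}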

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-989166 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e0ad56ab-7848-48ba-84dd-7eac8d2166ee] Pending
helpers_test.go:344: "busybox" [e0ad56ab-7848-48ba-84dd-7eac8d2166ee] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1014 14:54:48.289533   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/auto-517678/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [e0ad56ab-7848-48ba-84dd-7eac8d2166ee] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004744077s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-989166 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-989166 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-989166 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.02s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [73313975-3d02-4629-9437-ec78b344b297] Pending
helpers_test.go:344: "busybox" [73313975-3d02-4629-9437-ec78b344b297] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [73313975-3d02-4629-9437-ec78b344b297] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004902086s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-201291 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-201291 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (686.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-813300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 14:56:42.656555   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/custom-flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:56:48.130919   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/calico-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-813300 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (11m25.747151185s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-813300 -n no-preload-813300
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (686.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (561.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-989166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 14:57:29.472691   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/addons-313496/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.240450   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.246780   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.258100   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.279430   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.320851   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.402259   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.563809   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:38.885477   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:39.527014   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:40.809291   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:42.812870   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:43.370894   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-989166 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (9m21.658163969s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-989166 -n embed-certs-989166
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (561.92s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (517.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-201291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 14:57:58.734013   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/flannel-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:57:59.440297   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:58:09.682647   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-201291 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (8m37.461760086s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-201291 -n default-k8s-diff-port-201291
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (517.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-399767 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-399767 --alsologtostderr -v=3: (4.29546531s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-399767 -n old-k8s-version-399767: exit status 7 (63.491103ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-399767 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)
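The EnableAddonAfterStop steps run `minikube status` against a stopped profile and deliberately tolerate the non-zero exit: minikube packs component health into the status exit code bits, so exit 7 (host, cluster and Kubernetes all down) is the expected answer after `minikube stop`, which is why the log says "may be ok". A small sketch of reading that status without treating the exit code as fatal:

package main

import (
	"fmt"
	"os/exec"
)

// hostStatus runs `minikube status --format={{.Host}}` and returns the printed
// field together with the exit code instead of treating non-zero as an error.
func hostStatus(profile string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile).Output()
	code := 0
	if exitErr, ok := err.(*exec.ExitError); ok {
		code = exitErr.ExitCode()
	}
	return string(out), code
}

func main() {
	out, code := hostStatus("old-k8s-version-399767")
	// Exit code 7 is the normal result for a fully stopped profile.
	fmt.Printf("host=%q exit=%d\n", out, code)
}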

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (47.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-870289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
E1014 15:22:01.836194   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/enable-default-cni-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-870289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (47.087206427s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.09s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-870289 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-870289 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.04062088s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-870289 --alsologtostderr -v=3
E1014 15:22:49.187818   15023 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-7836/.minikube/profiles/bridge-517678/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-870289 --alsologtostderr -v=3: (7.315348278s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.32s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-870289 -n newest-cni-870289
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-870289 -n newest-cni-870289: exit status 7 (63.767949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-870289 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-870289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-870289 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.1: (35.989254531s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-870289 -n newest-cni-870289
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-870289 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-870289 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-870289 -n newest-cni-870289
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-870289 -n newest-cni-870289: exit status 2 (230.409711ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-870289 -n newest-cni-870289
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-870289 -n newest-cni-870289: exit status 2 (248.517402ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-870289 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-870289 -n newest-cni-870289
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-870289 -n newest-cni-870289
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.37s)
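The Pause step pauses the profile, confirms the apiserver reports Paused while the kubelet reports Stopped (accepting the non-zero status exits shown above), and then unpauses and checks again. A condensed sketch of that round trip, with command paths and flags taken from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// status returns one field of `minikube status`; a non-zero exit is expected
// while the cluster is paused, so the error is deliberately ignored here.
func status(profile, field string) string {
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return string(out)
}

func main() {
	profile := "newest-cni-870289"
	if err := exec.Command("out/minikube-linux-amd64", "pause", "-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
		log.Fatalf("pause failed: %v", err)
	}
	fmt.Printf("paused:   APIServer=%s Kubelet=%s\n", status(profile, "APIServer"), status(profile, "Kubelet"))
	if err := exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
		log.Fatalf("unpause failed: %v", err)
	}
	fmt.Printf("unpaused: APIServer=%s Kubelet=%s\n", status(profile, "APIServer"), status(profile, "Kubelet"))
}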

                                                
                                    

Test skip (38/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.31.1/cached-images 0
15 TestDownloadOnly/v1.31.1/binaries 0
16 TestDownloadOnly/v1.31.1/kubectl 0
20 TestDownloadOnlyKic 0
29 TestAddons/serial/Volcano 0.31
38 TestAddons/parallel/Olm 0
45 TestAddons/parallel/AmdGpuDevicePlugin 0
49 TestDockerFlags 0
52 TestDockerEnvContainerd 0
54 TestHyperKitDriverInstallOrUpdate 0
55 TestHyperkitDriverSkipUpgrade 0
106 TestFunctional/parallel/DockerEnv 0
107 TestFunctional/parallel/PodmanEnv 0
118 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
120 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
122 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
123 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
155 TestGvisorAddon 0
177 TestImageBuild 0
204 TestKicCustomNetwork 0
205 TestKicExistingNetwork 0
206 TestKicCustomSubnet 0
207 TestKicStaticIP 0
239 TestChangeNoneUser 0
242 TestScheduledStopWindows 0
244 TestSkaffold 0
246 TestInsufficientStorage 0
250 TestMissingContainerUpgrade 0
255 TestNetworkPlugins/group/kubenet 2.93
264 TestNetworkPlugins/group/cilium 3.1
280 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.31s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:988: (dbg) Run:  out/minikube-linux-amd64 -p addons-313496 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.31s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-517678 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-517678" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-517678

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-517678"

                                                
                                                
----------------------- debugLogs end: kubenet-517678 [took: 2.793667559s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-517678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-517678
--- SKIP: TestNetworkPlugins/group/kubenet (2.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.10s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-517678 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-517678" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-517678

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-517678" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-517678"

                                                
                                                
----------------------- debugLogs end: cilium-517678 [took: 2.960546514s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-517678" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-517678
--- SKIP: TestNetworkPlugins/group/cilium (3.10s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-887610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-887610
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    